
COMP 361/5611, Elementary Numerical Methods

Tristan Glatard

Concordia University
Department of Computer Science and Software Engineering

No cell phones, laptops, or any other electronic devices except ENCS calculators.

This exam is 13 pages long, including the cover page. It has 12 questions labeled from Q1 to Q12. Check that your copy is complete.

Q1 - By evaluating the determinant, determine if the following matrix is singular, ill-conditioned, or well-conditioned:

A =1 1 7 4 6 2 1 9 4
Solution:
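The expected solution is a hand computation; as a quick numerical check (a sketch, assuming the flattened entries above are listed row by row, and using NumPy rather than the exam's by-hand method):

```python
import numpy as np

# Q1 matrix, assuming the entries are listed row by row.
A = np.array([[1.0, 1.0, 7.0],
              [4.0, 6.0, 2.0],
              [1.0, 9.0, 4.0]])

det = np.linalg.det(A)

# A common conditioning check: compare |det A| with the product of
# the Euclidean norms of the rows.  A ratio close to 0 suggests an
# ill-conditioned matrix; a ratio of order 1, a well-conditioned one.
row_norms = np.sqrt((A ** 2).sum(axis=1))
ratio = abs(det) / row_norms.prod()
print(det, ratio)
```

Here det ≈ 202 and the ratio is of order 1, pointing to a well-conditioned matrix.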

Q2 - Use Gauss elimination to solve the equations Ax = b, where:

A =
[ 3 6 14 ]
[ 3 6  7 ]
[ 1 3  4 ]

b =

Grading:

A = LU =
[ 3 8 1 ]
[ 5 9 3 ]
[ 2 1 0 ]

1pt for the method (no partial mark).

1pt for the numerical result (partial marks if some iterations are correct).

2 0 0 0 1 1

0 0 4

x2 + x3 = 5 ⇒ x2 = 7
2x1 = 1 ⇒ x1 = 1/2
Grading:

1pt for the method (no partial mark).
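The elimination and back-substitution steps above can be sketched in code. Since b is not visible in this excerpt, the right-hand side below is hypothetical, chosen so that the exact solution of the Q2 system is x = (1, 1, 1):

```python
import numpy as np

def gauss_solve(A, b):
    '''Solve Ax = b by Gauss elimination with partial pivoting,
    followed by back substitution.'''
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: move the largest pivot candidate to row k.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# A as in Q2; b is a hypothetical right-hand side (not the exam's),
# chosen so that the exact solution is (1, 1, 1).
A = [[3, 6, 14], [3, 6, 7], [1, 3, 4]]
b = [23, 16, 8]
x = gauss_solve(A, b)
```

Note that on this matrix the (2, 2) entry becomes 0 after the first elimination step, so plain elimination without row swaps would divide by zero; pivoting is what makes the sketch robust.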

Solution:
We use Lagrange’s formula, which we retrieve as follows: it is a linear combination of polynomials li, where li is 0 at all the xj for j ≠ i and 1 at xi. The linear combination is weighted by the yi to respect the interpolation condition:

P(x) = y0 (x − x1)(x − x2)/((x0 − x1)(x0 − x2))
     + y1 (x − x0)(x − x2)/((x1 − x0)(x1 − x2))
     + y2 (x − x0)(x − x1)/((x2 − x0)(x2 − x1))
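Lagrange's formula evaluates mechanically; a minimal sketch, using hypothetical points on y = x² rather than the exam's (not fully visible) data:

```python
def lagrange_interp(x_data, y_data, x):
    '''Evaluate the Lagrange interpolating polynomial at x.'''
    total = 0.0
    n = len(x_data)
    for i in range(n):
        # l_i(x): equal to 1 at x_i and 0 at every other node x_j.
        li = 1.0
        for j in range(n):
            if j != i:
                li *= (x - x_data[j]) / (x_data[i] - x_data[j])
        total += y_data[i] * li
    return total

# Hypothetical data: three points on y = x**2.
print(lagrange_interp([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # 2.25
```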

Grading:

P(0.5) = 0.5 (1.5)/2 − 0.5 (0.5) = 1/8

def linear_regression(x_data, y_data):
    '''
    Returns a tuple (a, b) representing the straight line of
    equation y = a + bx that fits, in the least-squares sense,
    the points represented by arrays x_data and y_data.
    '''

2/3 pts for each missing line: y bar, b, a. Partial marks for approximate answers for b or a.
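The grading note names the missing lines (y bar, b, a). A hedged completion using the standard least-squares formulas b = Σ yi(xi − x̄) / Σ xi(xi − x̄) and a = ȳ − b·x̄ (a sketch; the exam's model solution is not visible in this excerpt):

```python
def linear_regression(x_data, y_data):
    '''
    Returns a tuple (a, b) representing the straight line of
    equation y = a + bx that fits, in the least-squares sense,
    the points represented by arrays x_data and y_data.
    '''
    n = len(x_data)
    x_bar = sum(x_data) / n
    y_bar = sum(y_data) / n   # the "y bar" line from the grading note
    b = (sum(y * (x - x_bar) for x, y in zip(x_data, y_data))
         / sum(x * (x - x_bar) for x in x_data))
    a = y_bar - b * x_bar
    return (a, b)
```

For points lying exactly on y = 2 + 3x, e.g. linear_regression([0, 1, 2, 3], [2, 5, 8, 11]), this returns (a, b) = (2.0, 3.0).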


Grading:

1pt for method (no partial marks).

c = (a f(b) − b f(a)) / (f(b) − f(a))

This can be retrieved from the equation of the straight line passing through (a, f(a)) and (b, f(b)).

1pt for method (partial marks if the formula “looks good”).

1pt for numerical result (partial marks at each iteration).
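The false-position update above, iterated while keeping the root bracketed, can be sketched as follows (the exam's particular f is not shown in this excerpt, so a sample f(x) = x² − 2 is used):

```python
def false_position(f, a, b, tol=1e-6, max_iter=100):
    '''Root finding with c = (a*f(b) - b*f(a)) / (f(b) - f(a)).'''
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("the root must be bracketed by [a, b]")
    c = a
    for _ in range(max_iter):
        # Where the chord through (a, f(a)) and (b, f(b)) crosses y = 0.
        c = (a * fb - b * fa) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:      # root is in [a, c]
            b, fb = c, fc
        else:                # root is in [c, b]
            a, fa = c, fc
    return c

# Sample use: the positive root of x**2 - 2 bracketed by [1, 2].
root = false_position(lambda x: x * x - 2, 1.0, 2.0)
```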

(Remember, f′ is in the denominator: “we have a problem when f′(x) is 0”.) The formula can be retrieved by approximating f by the straight line of slope f′(x) passing through (x, f(x)), and finding x_{i+1} where this line crosses y = 0.

Here we have: f′(x) = 1 − 2 sin(x/2).

Accuracy < 0.01, method converged.
Grading:

diff is the derivative of f.

init_x is the initial estimate.
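Putting the pieces together, with diff and init_x named as in the grading notes and a guard for the f′(x) = 0 problem mentioned above (a sketch; the sample function below is illustrative, not the exam's):

```python
def newton(f, diff, init_x, tol=0.01, max_iter=50):
    '''Newton-Raphson iteration: x_{i+1} = x_i - f(x_i) / diff(x_i).'''
    x = init_x
    for _ in range(max_iter):
        d = diff(x)
        if d == 0:
            # "we have a problem when f'(x) is 0"
            raise ZeroDivisionError("diff(x) is 0")
        step = f(x) / d
        x = x - step
        if abs(step) < tol:   # accuracy reached, method converged
            return x
    return x

# Sample use: the positive root of x**2 - 2, starting from 1.5.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5, tol=1e-12)
```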


Solution:
As in the lecture notes.

x    | 0.1  | 0.2  | 0.3  | 0.4  | 0.5
f(x) | 0.01 | 0.04 | 0.09 | 0.16 | 0.25

f′(x) = (f(x + h) − f(x − h)) / (2h) + O(h²)

This expression is easy to remember as it is the slope of the straight line passing through (x − h, f(x − h)) and (x + h, f(x + h)). It can also be retrieved by subtracting the Taylor developments of f in (x + h) and in (x − h).

To check our result, we note from the data that f(x) = x², which gives f′(x) = 2x (the O(h²) central-difference approximation is exact in this case, since its error term involves f′′′ = 0).

Grading:

1pt for method (partial marks if the formula “looks good”).

1pt for result (no partial marks).
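Applied to the tabulated data (h = 0.1), for example at x = 0.3, an interior point where the central formula applies (the exam's exact evaluation point is not visible in this excerpt):

```python
# Tabulated values from the question: f(x) = x**2 sampled with h = 0.1.
xs = [0.1, 0.2, 0.3, 0.4, 0.5]
fs = [0.01, 0.04, 0.09, 0.16, 0.25]
h = 0.1

# Central difference at the interior point x = 0.3 (index 2).
deriv = (fs[3] - fs[1]) / (2 * h)
print(deriv)  # close to f'(0.3) = 0.6
```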

We use forward differences instead. We recall that the O(h²) formula for forward differences is based on the Taylor developments of f in (x + h) and in (x + 2h):

f(x + 2h) − 4f(x + h) = f(x) − 4f(x) + 2hf′(x) − 4hf′(x) + O(h³) = −3f(x) − 2hf′(x) + O(h³)

It gives:

f′(x) = (−3f(x) + 4f(x + h) − f(x + 2h)) / (2h) + O(h²)

1pt for method (partial marks if the formula “looks good”).

1pt for result (no partial marks)
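A sketch of the forward-difference formula at the left boundary x = 0.1 of the earlier table, where the central formula would need f(0.0) (the exam's actual evaluation point is not visible in this excerpt):

```python
# Same table; at the left boundary x = 0.1 the central formula would
# need f(0.0), so the O(h**2) forward-difference formula is used.
xs = [0.1, 0.2, 0.3, 0.4, 0.5]
fs = [0.01, 0.04, 0.09, 0.16, 0.25]
h = 0.1

deriv = (-3 * fs[0] + 4 * fs[1] - fs[2]) / (2 * h)
print(deriv)  # close to f'(0.1) = 0.2
```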
