NLA - exercises 2017

Week 12: Lecture 30

We considered the Jacobi method and the bisection method for approximating eigenvalues.
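The bisection method rests on a Sturm-sequence count: for a real symmetric tridiagonal matrix, the number of negative pivots in the LDL^T factorization of T - xI equals the number of eigenvalues below x. A minimal sketch in Python/NumPy (the course work itself is in Matlab; the function names here are mine):

```python
import numpy as np

def count_evals_below(a, b, x):
    """Number of eigenvalues below x of the symmetric tridiagonal matrix
    with diagonal a and off-diagonal b, via the pivot recurrence of the
    LDL^T factorization of T - xI (a Sturm-sequence count)."""
    count, d = 0, 1.0
    for i in range(len(a)):
        d = a[i] - x - (b[i - 1] ** 2 / d if i > 0 else 0.0)
        if d == 0.0:
            d = -np.finfo(float).tiny   # standard fix to avoid breakdown
        if d < 0.0:
            count += 1
    return count

def bisect_eval(a, b, k, lo, hi, tol=1e-12):
    """k-th smallest eigenvalue (k = 1, 2, ...) by bisection on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_evals_below(a, b, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example: T = tridiag(-1, 2, -1) of size 4
a = np.full(4, 2.0)
b = np.full(3, -1.0)
lam1 = bisect_eval(a, b, 1, 0.0, 4.0)
```

For this test matrix the exact eigenvalues are 2 - 2 cos(j pi/5), which makes a convenient check.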

Furthermore, and to be continued, we sketched the idea of Divide and Conquer.


Exercises: 30.1, 30.4, 30.6

Exam 2 from 2014, Exercise 4 (interlacing singular values?)

And two QR-related exercises:

Exam 2 from 2014, Exercise 2 (the "nonsymmetric QR-lemma")

Exam 2 from 2015, Exercise 3 (shifting with an eigenvalue).

Handing in to be graded:

Exam 2014 Ex 4 and Exam 2015 Ex 3.

Week 11: Lecture 29

This week we consider the full QR-algorithm, including shifts.

We prove an elegant lemma with a rather surprising interpretation. As a consequence, we consider the RQI shift and the Wilkinson shift.
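Concretely, a shifted QR-step factors T - mu I = QR and forms RQ + mu I, a similarity transform of T; the Wilkinson shift takes mu as the eigenvalue of the trailing 2x2 block closest to the last diagonal entry. A hedged Python/NumPy sketch (the Matlab exercise does this with Givens rotations; the function names here are mine):

```python
import numpy as np

def wilkinson_shift(T):
    """Eigenvalue of the trailing 2x2 block of T closest to T[-1, -1]."""
    a, b, c = T[-2, -2], T[-1, -2], T[-1, -1]
    delta = (a - c) / 2.0
    sign = 1.0 if delta >= 0 else -1.0
    return c - sign * b ** 2 / (abs(delta) + np.hypot(delta, b))

def qr_step(T, shift):
    """One shifted QR-step: factor T - mu*I = QR, return RQ + mu*I."""
    n = T.shape[0]
    Q, R = np.linalg.qr(T - shift * np.eye(n))
    return R @ Q + shift * np.eye(n)

# Example: shifted steps on tridiag(-1, 2, -1) of size 4; the last
# off-diagonal entry collapses to (numerical) zero within a few steps.
T = np.diag(np.full(4, 2.0)) + np.diag(np.full(3, -1.0), 1) \
    + np.diag(np.full(3, -1.0), -1)
for _ in range(8):
    T = qr_step(T, wilkinson_shift(T))
```

Each step is a similarity transform, so the eigenvalues are preserved while the bottom off-diagonal entry converges (cubically, for the Wilkinson shift) to zero.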

Exercises: Third Matlab Exercise = T&B Exercise 29.1:

  • (a,c,d,e) as indicated in the book
  • (b) use Givens rotations for the QR-factorization
Your main program is called MyQR(A,option1,option2), where
  • option1=0 means no shifts
  • option1=1 means RQI shifts
  • option1=2 means Wilkinson shifts
  • option2=0 means no graphical output
  • option2=1 means show the saw-tooth plot (see part (c))
All sub-functions used are in the same file.
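For part (b): a QR-factorization by Givens rotations zeroes one subdiagonal entry at a time with a 2x2 rotation acting on two rows. The sketch below is in Python/NumPy for illustration only (the exercise itself asks for Matlab, and this is not the requested MyQR; the function names are mine):

```python
import numpy as np

def givens(a, b):
    """Rotation (c, s) with [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def qr_givens(A):
    """QR-factorization by Givens rotations: returns Q, R with A = Q @ R."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):                        # eliminate column j
        for i in range(m - 1, j, -1):         # bottom-up within the column
            c, s = givens(R[i - 1, j], R[i, j])
            G = np.array([[c, s], [-s, c]])
            R[i - 1:i + 1, j:] = G @ R[i - 1:i + 1, j:]
            Q[:, i - 1:i + 1] = Q[:, i - 1:i + 1] @ G.T   # accumulate Q
    return Q, R
```

Each rotation touches only two rows, which is what makes Givens rotations attractive for the tridiagonal matrices appearing in the QR-algorithm.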

Please send this file by Sunday November 26, 23:59 to Jan. In your e-mail, include the discussion as asked for in part (e).

To better understand the mentioned lemma, try this in Matlab.

Week 10: Lectures 27, 28

After finishing Lecture 27 we consider the QR-iteration as introduced in Lecture 28. We restrict ourselves to real symmetric matrices, show that the QR-iteration is mathematically equivalent to Simultaneous Iteration (also called the Block Power Method), and study its convergence.
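The equivalence can be observed numerically: the unshifted QR-iteration and simultaneous iteration started from the identity produce the same matrices, up to signs coming from the non-uniqueness of the QR-factorization. A small Python/NumPy illustration (Matlab would be analogous; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))
A = S + S.T                      # real symmetric test matrix

# Unshifted QR-iteration: A_{k+1} = R_k Q_k where A_k = Q_k R_k
Ak = A.copy()
for _ in range(3):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q

# Simultaneous (block power) iteration started from the identity;
# Qhat.T @ A @ Qhat then equals A_k up to a diagonal +/-1 similarity,
# because both processes implicitly QR-factorize A^k.
Qhat = np.eye(4)
for _ in range(3):
    Qhat, _ = np.linalg.qr(A @ Qhat)
B = Qhat.T @ A @ Qhat
```

The +/-1 ambiguity means the entries agree in absolute value; normalizing the QR-factorizations to have positive diagonal in R would remove it.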

Exercises: 27.2 (I do not know an elementary proof of 27.2 (a)!), 28.1, 28.2, 28.3.

Exercise 1 from Tweede Deeltentamen 2015 (on Rayleigh Quotient Iteration).

Handing in to be graded:

27.2(b) and Exercise 1 (a,b,c,d) from Tweede Deeltentamen 2015.

Week 09: Lectures 24, 25, 26, 27

Lecture 24 reviews all basics about eigenvalues and eigenvectors, including diagonalizability and the Schur decomposition. In Lectures 25-26, the two-phase structure present in a number of algorithms is explained.

Lecture 27 contains the first algorithms: the Power Method, Inverse Iteration, and Rayleigh Quotient Iteration. So far we have only seen the Power Method.
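As a reminder, the Power Method simply applies A repeatedly and renormalizes; the Rayleigh quotient of the iterate then estimates the dominant eigenvalue. A minimal Python/NumPy sketch (the function name and the starting vector are my choices):

```python
import numpy as np

def power_method(A, steps=200):
    """Power method: the iterate converges to the eigenvector of the
    dominant eigenvalue (for a generic start); the Rayleigh quotient
    v^T A v gives the eigenvalue estimate."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(steps):
        w = A @ v
        v = w / np.linalg.norm(w)          # renormalize each step
    return v @ A @ v, v                    # Rayleigh quotient, eigenvector

# Example: dominant eigenvalue of a 2x2 symmetric matrix is (5+sqrt(5))/2
A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_method(A)
```

Inverse Iteration and Rayleigh Quotient Iteration (still to come) replace the multiplication by A with a solve against A - mu I.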

The following exercises are, in fact, important perturbation theorems:

  • 24.2: Gershgorin's Circle Theorem
  • 26.3: The Bauer-Fike Theorem
  • 26.1: Pseudospectra
Remaining exercises of interest are 24.1, 24.4, 25.1, 25.3.
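Gershgorin's theorem is easy to check numerically: every eigenvalue lies in at least one disc centered at a diagonal entry, with radius the sum of the absolute off-diagonal entries in that row. A small Python/NumPy illustration on a hand-picked matrix (my example, not from the book):

```python
import numpy as np

A = np.array([[ 4.0,  0.5, 0.1],
              [ 0.2, -1.0, 0.3],
              [ 0.1,  0.1, 2.0]])

centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)  # off-diagonal row sums

# Gershgorin: every eigenvalue lies in at least one disc
eigs = np.linalg.eigvals(A)
in_some_disc = [np.any(np.abs(lam - centers) <= radii) for lam in eigs]
```

Here the three discs are disjoint, so each contains exactly one eigenvalue.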

Handing in to be graded:

25.1, 25.3, 26.1

Week 08: Lectures 20, 21, 22, 23

In these four lectures we look at Gaussian elimination from the computational point of view.

We look at the algorithm, at so-called pivoting to enhance stability, and we re-examine our definition of stability based on a gruelling example.

As a special case, we look (again) at Cholesky factorization.
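The central algorithm of these lectures, Gaussian elimination with partial pivoting, computes PA = LU with all multipliers bounded by 1 in absolute value. A hedged Python/NumPy sketch (the function name is mine; a textbook implementation would store L and U in place):

```python
import numpy as np

def lu_partial_pivoting(A):
    """Gaussian elimination with partial pivoting: returns P, L, U
    with P @ A = L @ U, L unit lower triangular, |L[i,j]| <= 1."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))   # largest pivot in column k
        if p != k:                            # swap rows of U and P, and the
            U[[k, p], k:] = U[[p, k], k:]     # already-computed part of L
            P[[k, p]] = P[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]       # multiplier, |.| <= 1
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U
```

Bounding the multipliers is exactly what pivoting buys for stability; without it, elements of U can grow without control.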

Exercises: 20.1, 20.2, 20.3, 21.1, 22.1, 23.1.

Handing in to be graded:

21.1, 22.1, 23.1

Week 06: Lects 16, 17

In these two lectures we study the stability of two algorithms:

(1) Householder reflection QR-factorization
(2) Back substitution for upper triangular systems

The first will be done on a global level, the second in detail.
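For reference, back substitution solves Rx = b from the last equation upwards; this is the algorithm whose backward stability Lecture 17 proves in detail. A minimal Python/NumPy sketch (the function name is mine):

```python
import numpy as np

def back_substitution(R, b):
    """Solve R x = b for nonsingular upper triangular R, working from
    the last equation upwards."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the already-known part of the equation, then divide
        x[i] = (b[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
    return x
```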

Try exercises 16.1, 17.1, 17.2, Exam 2013 Ex 3 (c)(d)

Handing in to be graded:

Exam 2013 Ex 3 (c)(d).

Week 05: Lects 12, 14, 15

We encountered conditioning of problems, and stability of algorithms to solve these problems.

A backward stable algorithm applied to a well-conditioned problem results in an accurately computed solution (see Th 15.1).

Backward stability is well-illustrated at the beginning of Lect 15: subtracting two numbers is backward stable; bad results in case they are almost equal are caused by ill-conditioning of the mathematical problem of subtraction, and not by the algorithm.
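This is easy to reproduce: the relative condition number of subtraction, (|a| + |b|)/|a - b|, blows up for nearly equal inputs, and it amplifies the rounding error already present in the stored operands. A small Python demonstration (my example):

```python
# Subtracting two nearly equal numbers: the algorithm is backward
# stable (the computed difference is the exact difference of the
# stored, rounded inputs), yet the result can have few correct digits
# because the *problem* is ill-conditioned.
a, b = 1.0 + 1e-12, 1.0
kappa = (abs(a) + abs(b)) / abs(a - b)   # condition number, roughly 2e12

# The stored a carries a rounding error of order machine epsilon;
# kappa amplifies it into a visible relative error in the difference:
computed = a - b                          # exact for the stored inputs
rel_err = abs(computed - 1e-12) / 1e-12   # roughly 1e-4, not 1e-16
```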

Try exercises 12.1, 14.1, 14.2, 15.1, 15.2.

Apart from that, Ex. 3(a)(b) from Exam 1, 2013 (next week you continue with (c) and (d)) and Ex. 2 from Exam 1, 2015.

Handing in to be graded: Ex. 3(a)(b) from Exam 1, 2013.

Week 04: Lects 11-12

We summarized our analysis of Least-Squares problems and algorithms to solve them:
  • Normal equations (Gauss) with Cholesky decomposition
  • QR-decomposition based methods
  • SVD based method
The QR-decomposition based methods have several choices for the implementation of the orthogonalization. The SVD based method is recommended in case the problem is (close to) rank-deficient.
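The three approaches are easy to compare side by side. A Python/NumPy sketch (variable names mine) solving one small full-rank least-squares problem with the normal equations plus Cholesky, with QR, and with the SVD:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 3))   # tall full-rank matrix (assumed)
b = rng.standard_normal(8)

# (1) Normal equations (Gauss) with Cholesky: solve A^T A x = A^T b
L = np.linalg.cholesky(A.T @ A)
y = np.linalg.solve(L, A.T @ b)
x_ne = np.linalg.solve(L.T, y)

# (2) QR-decomposition: solve R x = Q^T b
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# (3) SVD: x = V diag(1/sigma) U^T b -- preferred near rank-deficiency
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_svd = Vt.T @ ((U.T @ b) / s)
```

On a well-conditioned problem all three agree; the normal equations square the condition number, which is why they are the first to break down.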

Then we moved to the concept of conditioning of a problem. We defined absolute and relative condition numbers and gave intuition behind their definition.

We showed that catastrophic cancellation is not (just) due to finite precision arithmetic, but due to inherent ill-conditioning of subtraction.

We finished by considering the conditioning of roots of polynomials in terms of perturbation of their coefficients.

Multiple roots cause serious problems, but also single roots may be very sensitive to perturbations.
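Wilkinson's famous example makes this concrete: all roots of (x-1)(x-2)...(x-20) are simple, yet a perturbation of about 2^-23 in a single coefficient sends several roots far into the complex plane. A Python/NumPy reproduction (a sketch; note that np.roots itself computes roots via a companion-matrix eigenvalue problem):

```python
import numpy as np

# Wilkinson polynomial: p(x) = (x-1)(x-2)...(x-20), simple roots 1..20
p = np.poly(np.arange(1, 21))     # coefficients, highest degree first

p_pert = p.copy()
p_pert[1] += 2.0 ** -23           # tiny perturbation of the x^19 coefficient

roots_pert = np.roots(p_pert)
max_imag = np.max(np.abs(roots_pert.imag))   # several roots go complex
```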

Because of Matlab Exercise 1, there will be no additional exercises.

Week 03: Lects 9-11, and superficially 31 and 23

We recap the triangular orthogonalization of Lecture 8, to contrast it with the orthogonal triangularization of Lecture 10.

Specifically, we define Householder reflections, and QR-decomposition using these.

Also, we use Givens Rotations (Ex. 10.4) to do the same.

We also show that both these orthogonal transformations can be employed to factorize a given matrix A as

A = U B V*,

where U and V are unitary, and B is bidiagonal (see also Lecture 31).
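A Householder reflection I - 2vv^T maps a chosen vector to a multiple of e1; applying one reflection per column gives the QR-decomposition of Lecture 10 (the bidiagonalization above uses the same reflections, applied from both sides). A hedged Python/NumPy sketch (the function name is mine):

```python
import numpy as np

def house_qr(A):
    """Householder QR: returns Q (m x m) and R with A = Q @ R."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(n):
        x = R[k:, k]
        v = x.copy()
        # choose the sign that avoids cancellation (Lecture 10)
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        v /= np.linalg.norm(v)
        # apply the reflection I - 2 v v^T to the trailing block of R,
        # and accumulate it into Q from the right
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R
```

The exercise with Givens rotations (Ex. 10.4) achieves the same factorization with 2x2 rotations instead of reflections.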

Finally we go through Lecture 11 on Least Squares Problems. We will briefly comment on the Cholesky decomposition, which is mentioned in Lecture 11 and treated in detail in Lecture 23.

It would be good to try the following old exam questions:

  • 2013: Exercise 2
  • 2014: Exercise 1 (a)(c)(d)(e)
  • 2015: Exercise 1 (a)-(f)
You can find these exams under "what to do for the exam".

Please also look at Exercises 10.1 and 11.1 from T&B.

Handing in to be graded:

Exercise 1 (a)-(f) from the Exam of 2015.

Deadline: the beginning of the lecture of October 4.

Week 02: Lects 13, 6-8

We consider some aspects of Finite Precision Arithmetic (FPA).

Try to make exercises 13.1, 13.2, 13.3.

Exercise 13.3 shows that the order of computations matters!

In this light we continued by studying the difference between the Classical Gram-Schmidt (CGS) algorithm and its modified version (MGS). In exact arithmetic, they are the same.

Lecture 6 on projections should cause no difficulties. Exercises 6.1 to 6.4 are easy. Try also the more difficult 6.5.

Lecture 7 discusses CGS, and the QR-decomposition (unitary times upper triangular) of an arbitrary matrix. Two related examples discuss CGS in a space of polynomials, and solving Ax=b using QR-decomposition.

Try Exercises 7.1 and 7.5

Lecture 8 introduces MGS and the interpretation of both CGS and MGS as triangular orthogonalization (to be discussed next week).
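In exact arithmetic CGS and MGS produce the same Q and R; in floating point, MGS keeps Q much closer to orthogonal on ill-conditioned input. A Python/NumPy sketch of both (the course work is in Matlab; function names are mine):

```python
import numpy as np

def cgs(A):
    """Classical Gram-Schmidt: project the *original* column against
    all previous q's at once."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        R[:j, j] = Q[:, :j].T @ A[:, j]
        v = A[:, j] - Q[:, :j] @ R[:j, j]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

def mgs(A):
    """Modified Gram-Schmidt: subtract each projection immediately
    from all remaining columns."""
    m, n = A.shape
    Q = A.astype(float).copy()
    R = np.zeros((n, n))
    for j in range(n):
        R[j, j] = np.linalg.norm(Q[:, j])
        Q[:, j] /= R[j, j]
        R[j, j + 1:] = Q[:, j] @ Q[:, j + 1:]
        Q[:, j + 1:] -= np.outer(Q[:, j], R[j, j + 1:])
    return Q, R
```

On nearly parallel columns like those in the exercise below, the loss of orthogonality ||Q^T Q - I|| is O(1) for CGS but tiny for MGS, even though both still reproduce A = QR.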

Additional exercises:

Use both CGS and MGS to orthonormalize the vectors (1,0.001,0.001), (1,0.001,0), (1,0,0.001) while computing in a decimal, three-digit idealized finite precision arithmetic.

Finally, implement QR-decomposition using both CGS and MGS in two ways: one in which the factor R of A=QR is computed column by column, and one row-by-row.

Handing in to be graded:

Exercises 6.5 and 13.2

Deadline: the beginning of next lecture (not after!)

Week 01: Lects 1-5

Please carefully read the rather basic Lectures 1-5 of Trefethen and Bau (T&B).

Then, look at the exercises belonging to them, and concentrate on the ones which seem difficult (there should not be too many!)

Try as many of these difficult ones as you can in the time you have, while also starting up Matlab and playing around with it a bit to renew your feel for it.

Matlab may also help you to solve some of the problems!

Since the first week's exercises only review prerequisites, you cannot score homework points for them. It is, however, possible to hand in problematic ones in order to receive feedback.

See also the pdf-section for Lecture Notes of:

  • Linear Algebra 2
  • Introduction to Numerical Mathematics: SVD
which are (part of) two first-year courses.