NLA MasterMath - Weekly Schedule 2018

Week 13

After explaining some features of the Second Matlab Assessment, we concentrated on Harmonic Rayleigh-Ritz, which is nothing other than orthogonalizing the eigenpair residuals against W=AV instead of against V. Interestingly, this has an interpretation as Rayleigh-Ritz applied to inv(A) with search space W.
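As a concrete illustration (my own sketch, not taken from the course material), harmonic Ritz pairs can be computed in Matlab from a given matrix A and an orthonormal basis V of the search space as follows:

    W = A*V;                       % test space W = A*V
    [Y, Theta] = eig(W'*W, W'*V);  % W'(A*V*y - theta*V*y) = 0, i.e. (V'A'AV)y = theta*(V'A'V)y
    theta = diag(Theta);           % harmonic Ritz values
    U = V*Y;                       % columns of U are the harmonic Ritz vectors

The interpretation in terms of inv(A) follows from W'*inv(A)*W = V'*A'*V: the ordinary Rayleigh-Ritz conditions for inv(A) with search space W lead to the same small pencil, with eigenvalues 1/theta.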

Then we went on to study the Jacobi-Davidson method, which is an attempt to improve upon the IRA method. Instead of expanding the current search space in each step by "the" residual (which happens in Lanczos/Arnoldi, where all residuals are linearly dependent), it was proposed to precondition the residual first, i.e. to expand with inv(K)r.

The system Kq=r was derived from an old idea by Jacobi and aims to approximate the correction q orthogonal to the current eigenvector approximation v such that v+q is a new and better eigenvector approximation.

Instead of accepting v+q as the next eigenvector approximation, q is added to the search space and the Ritz-Galerkin approach is applied with that search space, so that an even better approximate eigenvector than v+q can be determined.
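A minimal Matlab sketch of one such expansion step (my own illustration; V is the current orthonormal basis, (theta, v) the current Ritz pair with v in the span of V, and K some given preconditioner for A - theta*I):

    r = A*v - theta*v;          % residual of the current Ritz pair
    t = K \ r;                  % preconditioned residual, t = inv(K)*r
    t = t - V*(V'*t);           % orthogonalize against the current search space
    V = [V, t/norm(t)];         % expand the search space
    [Y, Theta] = eig(V'*A*V);   % Ritz-Galerkin with the enlarged search space

Solving the orthogonal correction equation below more accurately than by a single preconditioner solve yields the actual Jacobi-Davidson correction q.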

The derivation of the orthogonal correction equation (a generalized algebraic Riccati equation) was left as an exercise, as was its direct approximation in a subspace by another Ritz-Galerkin method, which turned out to be a small eigenproblem.

Week 12

Extended due to its success: Test 3 from last week, now with extra assumptions! Complete all the exercises yourself (with the book, your notes and the internet at hand, but without consulting other people, in particular fellow students) and hand them in tomorrow at the lecture.

Your final grade for Test 3 will then be the average (rounded to halves) of the scores of this week and last week.

We covered the Implicitly Restarted Arnoldi method. This is a way to change the start vector v of your Arnoldi factorization into (A-mu I)v without using the matrix A again, at the cost of a new Arnoldi factorization that is one dimension smaller.

This filtering technique can be used to remove unwanted eigenvector components from v, or in other words, to enrich v in the wanted components.
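Here is a minimal sketch of one implicit restart with a single shift mu (my own reconstruction, ignoring practical details such as complex shifts and locking). Assume an Arnoldi factorization A*V = V*H + f*em' is given, with V an n-by-m orthonormal basis, H an m-by-m upper Hessenberg matrix, f the residual vector and em the m-th unit vector:

    [Q, R] = qr(H - mu*eye(m));            % one shifted QR step on H; A itself is not used
    Hp = Q'*H*Q;                           % similar upper Hessenberg matrix
    Vp = V*Q;                              % rotated Arnoldi basis
    fnew = Vp(:,m)*Hp(m,m-1) + f*Q(m,m-1); % residual of the truncated factorization
    V = Vp(:,1:m-1);                       % keep the first m-1 basis vectors ...
    H = Hp(1:m-1,1:m-1);                   % ... and the leading block of Hp
    f = fnew;

Afterwards A*V = V*H + f*e' holds again with e the (m-1)-st unit vector, and the new start vector V(:,1) is a multiple of (A - mu*I) applied to the old one, so the component associated with mu has been damped.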

Week 11

This week, we start with the last of the three small tests about the material of Weeks 08-09-10. See below for details.

Then we paid attention to the following results in Chapter 4:

  • Theorem 4.3 (the residual of an exact eigenpair in the approximate matrix)
  • Lemma 4.1 (about the Rayleigh quotient of the projected exact eigenvector)
  • Theorem 4.6 (without proof) and its corollary about the eigenvalues
We skipped Chapter 5, because its essence was already treated in week 9 when we derived the QR-iteration.

Chapter 6 deals with Krylov subspace methods and repeats many well-known concepts from the linear system setting. We looked at the Hermitian Lanczos method and the Arnoldi method, and so far avoided the non-Hermitian Lanczos method; a small MGS-Arnoldi sketch follows after the list below.

  • Section 6.1 (theorem 6.1)
  • Section 6.2 (Proposition 6.8), Section 6.2.2 (MGS- and Householder-Arnoldi)
  • Algorithm 6.3 (restart with an enriched start vector)
  • We skipped Sect 6.2.3 and Sect 6.3, 6.4, 6.5
  • Section 6.6 about convergence: Lemma 6.1 and Theorem 6.3
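As announced above, here is a minimal MGS-Arnoldi sketch (my own illustration; A, a start vector v1 and a dimension m are assumed to be given). It builds an orthonormal Krylov basis V and an upper Hessenberg matrix H; the eigenvalues of the leading m-by-m block of H are the Ritz values of A with respect to the Krylov subspace:

    n = size(A,1);
    V = zeros(n,m+1);  H = zeros(m+1,m);
    V(:,1) = v1/norm(v1);
    for j = 1:m
        w = A*V(:,j);
        for i = 1:j                      % modified Gram-Schmidt orthogonalization
            H(i,j) = V(:,i)'*w;
            w = w - H(i,j)*V(:,i);
        end
        H(j+1,j) = norm(w);
        if H(j+1,j) == 0, break, end     % lucky breakdown: invariant subspace found
        V(:,j+1) = w/H(j+1,j);
    end
    ritz = eig(H(1:m,1:m));              % Ritz values (eigenvalue approximations)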
Exercises for this week:
  • 6.1, 6.5, 6.6, 6.7, 6.8

Week 10

This week we looked at deflation. Standard direct deflation is infeasible because it is computationally much too expensive. Instead, we consider
  • The rather simple solution for the normal case (see the sketch after this list)
  • Wielandt-Hotelling deflation for the (partly) diagonalizable case
  • Deflation directed at the Schur decomposition
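Here is the sketch referred to above for the simplest (Hermitian) case, under my own naming: once an eigenpair (lambda1, v1) with norm(v1) = 1 of a Hermitian A has been found, lambda1 can be shifted to zero without changing the other eigenpairs, after which the power method can be run again.

    A1 = A - lambda1*(v1*v1');     % Hotelling deflation: lambda1 is replaced by 0
    x  = randn(size(A,1),1);       % power method applied to the deflated matrix
    for k = 1:100
        x = A1*x;  x = x/norm(x);
    end
    lambda2 = x'*A1*x;             % Rayleigh quotient: approximates the eigenvalue of A
                                   % that is largest in modulus after lambda1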
Next, we started with subspace methods for eigenvalue problems. As before, we worked with a general search subspace and the orthogonal residual approach.

We introduced and investigated

  • Ritz pairs, projected matrices
  • The interlacing property
Roughly speaking, this is Chapter 4, Sections 4.1, 4.2, 4.3, where we skipped the more technical (though very interesting :-)) results.

Some exercises from Saad to be tried:

  • 4.2, 4.5, 4.9 and 4.10
Furthermore, it may be instructive to implement the Partial Schur Decomposition using RQI to find the eigenvalues and eigenvectors. Try it on a small (say 6x6) matrix. First choose it to be Hermitian, then arbitrary.

Week 9

We reviewed simple vector iterations (Section 4.1) to approximate a single eigenpair of a matrix A, which we assumed to be Hermitian.

We looked at

  • The Power Method (Section 4.1.1)
  • Shift-and-Invert (Section 4.1.2 and 4.2.2)
  • Rayleigh Quotient Iteration (Sect 4.2.2)
though in somewhat more detail than in the book, and only for Hermitian A. Here is a pdf with the details; the second exercise in that pdf is a good way to study the RQI. Finally, we briefly looked at the QR-iteration, using as a source Trefethen & Bau, Lectures 28-29 on the unshifted and shifted QR-iterations.
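For reference, a minimal RQI sketch (my own illustration; A is Hermitian and v a normalized start vector reasonably close to an eigenvector):

    n = size(A,1);
    for k = 1:10
        theta = v'*A*v;                 % Rayleigh quotient of the current iterate
        w = (A - theta*eye(n)) \ v;     % shift-and-invert step with shift theta
        v = w/norm(w);
        if norm(A*v - (v'*A*v)*v) < 1e-12, break, end   % small residual: eigenpair found
    end

For Hermitian A the local convergence is cubic; the linear systems become increasingly ill-conditioned as theta converges, but, perhaps surprisingly, this does not spoil the computed direction.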

Week 8

We will start with a short test that will count towards your final grade for the course. The test is about the material covered from Chapter 6, which builds upon the general theory in Chapter 5.

Then, we move on to eigenvalues, using the other book by Saad. After repeating some well-known facts about eigenvalue problems (see Chapter 1), we discussed

  • Min-Max theorems: Theorem 1.9, 1.10
Then we moved to Chapter 3 on perturbation theory.

We already encountered most of Sections 3.1.1 and 3.1.2 in the Linear Systems book; we briefly commented on Sections 3.1.3 and 3.1.4 and mentioned the example concluding Section 3.1.5.

More explicitly we covered:

  • Section 3.2.1 (Bauer-Fike with proof, Th 3.7 without proof)
  • Section 3.2.2 Corollary 3.3, Lemma 3.2, Theorem 3.8, Corollary 3.4
  • Section 3.2.3 Proposition 3.4
We did not cover Section 3.4, but the Gershgorin theorems are useful to know. Their proofs closely resemble the Bauer-Fike proofs.

As an exercise, show that Bauer-Fike is sharp for the matrices in Example 3.2. Perturb A with eH for small e. Do this exactly for the 2x2 case, and use Matlab to experiment with the nxn case for n=4,5,6,...
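A small Matlab scaffold for the experiment (my own sketch; A is the example matrix, E plays the role of the perturbation H, and e is its size, all assumed to be given). It compares the largest distance from a perturbed eigenvalue to the unperturbed spectrum with the Bauer-Fike bound:

    [V, D] = eig(A);  lam = diag(D);               % eigendecomposition of the unperturbed A
    mu = eig(A + e*E);                             % eigenvalues of the perturbed matrix
    dist = arrayfun(@(z) min(abs(z - lam)), mu);   % distance of each mu to the spectrum of A
    actual = max(dist)
    bound  = cond(V)*norm(e*E)                     % Bauer-Fike: actual <= kappa(V)*||e*E||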

Week 7

It is possible to create bi-orthogonal bases for a pair of Krylov spaces for A and A* using short recurrences only. This leads to a new class of methods in which an approximation from K(A,b) is sought whose residual is orthogonal to K(A*,c). Mathematically it is much harder to prove anything about such methods, even though in practice they can be quite effective.
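A minimal sketch of the Lanczos bi-orthogonalization process (my own reconstruction for real A, following the structure of Algorithm 7.1 and ignoring breakdown except for the obvious check; v1 and w1 with w1'*v1 nonzero and a number of steps m are assumed given). It produces bases V and W with W'*V = I such that W'*A*V is tridiagonal, using only three-term recurrences:

    n = size(A,1);
    V = zeros(n,m);  W = zeros(n,m);
    v = v1/norm(v1);  w = w1/(w1'*v);            % scale so that w'*v = 1
    vold = zeros(n,1);  wold = zeros(n,1);
    beta = 0;  delta = 0;
    for j = 1:m
        V(:,j) = v;  W(:,j) = w;
        alpha = w'*(A*v);
        vhat = A*v  - alpha*v - beta*vold;       % three-term recurrence for the V-basis
        what = A'*w - alpha*w - delta*wold;      % three-term recurrence for the W-basis
        s = what'*vhat;
        if s == 0, break, end                    % (serious) breakdown
        delta = sqrt(abs(s));  beta = s/delta;
        vold = v;  wold = w;
        v = vhat/delta;  w = what/beta;
    end
    T = W'*A*V;                                  % tridiagonal up to rounding errors

One can check numerically that W'*V is the identity and that T is tridiagonal; the linear-system methods of Section 7.2 are built on top of this process.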

We covered:

  • Chapter 7: Krylov Subspace Methods
  • Section 7.1: Lanczos bi-orthogonalization (without Section 7.1.2)
  • Section 7.2
  • Section 7.3 superficially
This week, the first computational assessment was discussed and questions about it were answered. Deadline for handing in is Tuesday October 30, 23:59.

Week 6

Not only symmetric/Hermitian matrices yield short recurrences for the orthonormal basis of the Krylov space. The Faber-Manteuffel theorem characterizes which other matrices admit short recurrences that are optimal in a certain sense.

We covered:

  • Chapter 6: Krylov Subspace Methods
  • Section 6.10
  • Section 6.11 (without Section 6.11.2)
  • We did not cover Section 6.12
This week, the first computational assessment was handed out. You were asked to study it carefully and to prepare questions for next class.

Week 5

This week we looked at the consequences of symmetry of the system matrix A. This implies symmetry of the upper Hessenberg matrix in the Arnoldi method and leads to three-term recurrence relations.
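A minimal sketch of the resulting three-term (Lanczos) recurrence (my own illustration; A Hermitian, a start vector v1 and the number of steps m are assumed given):

    n = size(A,1);
    V = zeros(n,m);  alpha = zeros(m,1);  beta = zeros(m,1);
    v = v1/norm(v1);  vold = zeros(n,1);  betaold = 0;
    for j = 1:m
        V(:,j) = v;
        w = A*v - betaold*vold;          % only the two previous basis vectors are needed
        alpha(j) = real(v'*w);
        w = w - alpha(j)*v;
        beta(j) = norm(w);
        if beta(j) == 0, break, end      % invariant subspace found
        vold = v;  betaold = beta(j);
        v = w/beta(j);
    end
    T = diag(alpha) + diag(beta(1:m-1),1) + diag(beta(1:m-1),-1);   % = V'*A*V, tridiagonal

The tridiagonal T is exactly what the upper Hessenberg matrix of the Arnoldi method reduces to when A is Hermitian.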

We covered:

  • Chapter 6: Krylov Subspace Methods
  • Section 6.6 (without 6.6.2)
  • Section 6.7 (without Section 6.7.3)
  • Section 6.8
Suggested exercises are:
  • 6.20, 6.21, 6.23 (only for GMRES and FOM)

Week 4

We will start with a short test that will count towards your final grade for the course.

We will then cover:

  • Chapter 6: Krylov Subspace Methods
  • Sections 6.1, 6.2, 6.3
  • 6.4 (without 6.4.2)
  • 6.5 (without 6.5.6 and 6.5.8 and 6.5.9)
In particular, we derived in detail:
  • The Full Orthogonalization Method
  • The Generalized Minimal Residual Method (a small GMRES sketch follows after this list)
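The sketch referred to above (my own illustration, with x0 = 0 and a fixed number of steps m): GMRES builds the Arnoldi relation with an (m+1)-by-m Hessenberg matrix H and minimizes the residual norm over the Krylov subspace via a small least-squares problem.

    n = length(b);  V = zeros(n,m+1);  H = zeros(m+1,m);
    beta = norm(b);  V(:,1) = b/beta;              % x0 = 0, so r0 = b
    for j = 1:m                                    % MGS-Arnoldi (breakdown not handled)
        w = A*V(:,j);
        for i = 1:j
            H(i,j) = V(:,i)'*w;  w = w - H(i,j)*V(:,i);
        end
        H(j+1,j) = norm(w);  V(:,j+1) = w/H(j+1,j);
    end
    e1 = zeros(m+1,1);  e1(1) = beta;
    y = H \ e1;                                    % least squares: min || beta*e1 - H*y ||
    x = V(:,1:m)*y;                                % GMRES iterate

FOM differs only in the small problem: it solves the square system H(1:m,1:m)*y = beta*e1(1:m) instead of the least-squares problem.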
Suggested exercises are:
  • 6.8, 6.9, 6.10, 6.13

Week 3

We covered
  • Chapter 5: Projection Methods
  • Excluding Sect 5.4
The following concepts should be understood:
  • Subspace methods with L=K and L=AK
  • A-inner products and norms
  • A*A-inner product and norms
  • The Minimal Residual algorithm (Section 5.3.2)
  • The Steepest Descent algorithm (Section 5.3.1); a small sketch of both follows after this list
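A minimal sketch of one step of each method (my own illustration; A, b and a current iterate x are assumed given, and A is assumed SPD for Steepest Descent):

    % one Minimal Residual step: search space K = span{r}, test space L = A*K
    r = b - A*x;
    p = A*r;
    alpha = (p'*r)/(p'*p);       % minimizes || b - A*(x + alpha*r) ||_2
    x = x + alpha*r;

    % one Steepest Descent step (A SPD): L = K = span{r}
    r = b - A*x;
    alpha = (r'*r)/(r'*(A*r));   % minimizes the A-norm of the error along r
    x = x + alpha*r;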
Suggested exercises are:
  • 5.1
  • 5.4, 5.8 (related to Ths 5.7 and 5.10)
  • 5.12 (random search direction)
  • 5.10 and 5.11
  • 5.16 and 5.17

Week 2

We covered
  • Chapter 4: Basic Iterative Methods
  • Excluding: 4.1.1: overlapping block methods
  • Excluding: 4.2.5, 4.3
The following concepts should be understood:
  • splittings of a matrix
  • fixed point iterations for affine transformations
  • convergent if and only if spectral radius of iteration matrix smaller than one
  • errors versus residuals
  • errors and residuals are powers of the iteration matrix times the initial error/residual
  • preconditioning: either to improve condition number or to reduce the spectral radius of iteration matrix
  • sufficient conditions for convergent iterations:
    • regular splittings for inverse nonnegative matrices
    • matrix A strictly diagonally dominant
    • A is SPD
  • basic methods as update & correction algorithms (a Jacobi sketch in this form follows after this list)
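The sketch referred to in the last item (my own illustration): the Jacobi method written as an update-and-correction iteration x <- x + inv(D)*r, corresponding to the splitting A = D - (D - A) with D the diagonal part of A.

    D = diag(diag(A));                 % splitting A = M - N with M = D
    x = zeros(size(b));
    for k = 1:100
        r = b - A*x;                   % current residual
        x = x + D \ r;                 % correction step with M = D
        if norm(r) <= 1e-10*norm(b), break, end
    end
    % the iteration converges if and only if the spectral radius of I - D\A is smaller than one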
Suggested exercises are:
  • 4.1, 4.4
  • 4.7 (but 'consistently ordered matrices' will not be tested)
Further exercises were given on the blackboard:

1) Give necessary and sufficient conditions for Jacobi and Gauss-Seidel to converge for an arbitrary invertible 2x2 matrix.

2) Apply the Jacobi method to an upper triangular system.

3) Implement the random minimal residual method (which selects a random update direction for the current approximation and minimizes the residual along that direction).

4) With A = [1 p q ; -1 1 r ; 0 -1 1], give necessary and sufficient conditions for Gauss-Seidel to converge.

Week 1

We covered
  • Chapter 3: Sparse Matrices
  • Sections 3.1, 3.2, 3.3 only
The following concepts should be understood:
  • adjacency graph
  • row and column permutations of matrices
  • symmetric permutation of a matrix
  • reordering of equations and unknowns in a linear system
  • LU and Cholesky decomposition depend on reorderings
  • Cuthill-McKee and Reverse Cuthill-McKee (a small Matlab sketch follows after this list)
  • reorderings based on independent sets and multi-colorings
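The sketch referred to above, using Matlab's built-in Reverse Cuthill-McKee reordering symrcm on a simple sparse test matrix (the 5-point Laplacian on a square grid):

    A = delsq(numgrid('S', 20));         % sparse SPD test matrix
    p = symrcm(A);                       % Reverse Cuthill-McKee permutation
    subplot(1,2,1), spy(A),      title('original pattern')
    subplot(1,2,2), spy(A(p,p)), title('after RCM: smaller bandwidth')

The reduced bandwidth limits the fill-in in an LU or Cholesky factorization, which is one motivation for studying reorderings.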
Suggested exercises are:
  • 3.4, 3.5, 3.6, 3.7, 3.8, 3.10, 3.11, 3.12
See also the pdf-section for Lecture Notes of:
  • Linear Algebra 2
  • Introduction to Numerical Mathematics: SVD
which are (part of) two first-year courses.