Paper 2, Section II, 40E

Numerical Analysis | Part II, 2020

(a) For $A \in \mathbb{R}^{n \times n}$ and nonzero $\boldsymbol{v} \in \mathbb{R}^{n}$, define the $m$-th Krylov subspace $K_{m}(A, \boldsymbol{v})$ of $\mathbb{R}^{n}$. Prove that if $A$ has $n$ linearly independent eigenvectors with at most $s$ distinct eigenvalues, then

$$\operatorname{dim} K_{m}(A, \boldsymbol{v}) \leqslant s \quad \forall m.$$
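
A minimal numerical check of this bound (illustrative only, not part of the question): the sketch below builds a symmetric $6 \times 6$ matrix with just $s = 3$ distinct eigenvalues, assembles the Krylov matrix $[\boldsymbol{v}, A\boldsymbol{v}, \ldots, A^{n-1}\boldsymbol{v}]$, and confirms its rank does not exceed $s$. All names and test data are made up for illustration.

```python
# Illustrative sketch (not part of the question): numerically checking that
# dim K_m(A, v) <= s when A is diagonalizable with at most s distinct eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
n, s = 6, 3
# Diagonalizable (here symmetric) A with only s = 3 distinct eigenvalues.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag([1.0, 1.0, 2.0, 2.0, 3.0, 3.0]) @ Q.T

v = rng.standard_normal(n)
vecs = [v]
for _ in range(n - 1):
    vecs.append(A @ vecs[-1])          # v, Av, A^2 v, ..., A^{n-1} v
K = np.column_stack(vecs)              # Krylov matrix; its column space is K_n(A, v)
print(np.linalg.matrix_rank(K))        # expected: at most s = 3
```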

(b) Define the term residual in the conjugate gradient (CG) method for solving a system $A\boldsymbol{x} = \boldsymbol{b}$ with a symmetric positive definite $A$. State two properties of the method regarding residuals and their connection to certain Krylov subspaces, and hence show that, for any right-hand side $\boldsymbol{b}$, the method finds the exact solution after at most $s$ iterations, where $s$ is the number of distinct eigenvalues of $A$.
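
The termination claim can likewise be observed numerically. The sketch below (illustrative only, not part of the question) runs a textbook CG iteration, tracking the residuals $\boldsymbol{r}_k = \boldsymbol{b} - A\boldsymbol{x}_k$, on a symmetric positive definite matrix with three distinct eigenvalues; in exact arithmetic the residual vanishes after at most $s = 3$ steps, and in floating point it drops to machine precision there.

```python
# Illustrative sketch (not part of the question): plain CG on a symmetric positive
# definite matrix with s = 3 distinct eigenvalues stops after at most 3 iterations.
import numpy as np

def cg(A, b, tol=1e-12):
    """Textbook conjugate gradients; returns the solution and the residual norms."""
    x = np.zeros(len(b))
    r = b - A @ x                       # residual r_k = b - A x_k
    d = r.copy()
    norms = [np.linalg.norm(r)]
    for _ in range(len(b)):
        if norms[-1] < tol:
            break
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)
        x = x + alpha * d
        r_new = r - alpha * Ad
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
        norms.append(np.linalg.norm(r))
    return x, norms

rng = np.random.default_rng(1)
n = 8
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = [1.0, 1.0, 1.0, 4.0, 4.0, 4.0, 9.0, 9.0]      # three distinct eigenvalues
A = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(n)

x, norms = cg(A, b)
print(len(norms) - 1, norms[-1])       # expected: 3 iterations, residual ~ machine precision
```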

(c) The preconditioned CG-method $P A P^{T} \widehat{\boldsymbol{x}} = P \boldsymbol{b}$ is applied for solving $A \boldsymbol{x} = \boldsymbol{b}$, with

Prove that the method finds the exact solution after two iterations at most.
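
Since the matrices of part (c) are not reproduced above, the following sketch only illustrates the general mechanism with made-up data: when $PAP^{T}$ has exactly two distinct eigenvalues, CG applied to $PAP^{T}\widehat{\boldsymbol{x}} = P\boldsymbol{b}$ terminates after at most two steps, and the solution of $A\boldsymbol{x} = \boldsymbol{b}$ is recovered as $\boldsymbol{x} = P^{T}\widehat{\boldsymbol{x}}$.

```python
# Illustrative sketch only -- the specific A and P of part (c) are not reproduced here.
# It shows the general mechanism: run CG on P A P^T xhat = P b; if P A P^T has exactly
# two distinct eigenvalues, CG stops after at most two steps, and x = P^T xhat solves A x = b.
import numpy as np

def cg_steps(A, b, tol=1e-12):
    """Plain CG; returns the solution and the number of iterations taken."""
    x, r = np.zeros(len(b)), b.copy()
    d, k = r.copy(), 0
    while np.linalg.norm(r) > tol and k < len(b):
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)
        x = x + alpha * d
        r_new = r - alpha * Ad
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r, k = r_new, k + 1
    return x, k

rng = np.random.default_rng(2)
n = 6
# Construct the example backwards: pick B = P A P^T with two distinct eigenvalues
# and an (assumed nonsingular) P, then recover A = P^{-1} B P^{-T}, which is SPD.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
B = Q @ np.diag([1.0, 1.0, 1.0, 3.0, 3.0, 3.0]) @ Q.T
P = rng.standard_normal((n, n)) + n * np.eye(n)
Pinv = np.linalg.inv(P)
A = Pinv @ B @ Pinv.T
b = rng.standard_normal(n)

xhat, k = cg_steps(P @ A @ P.T, P @ b)     # preconditioned system
x = P.T @ xhat                             # recover the solution of A x = b
print(k, np.linalg.norm(A @ x - b))        # expected: k = 2, tiny residual
```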

(d) Prove that, for any symmetric positive definite $A$, we can find a preconditioner $P$ such that the preconditioned CG-method for solving $A \boldsymbol{x} = \boldsymbol{b}$ would require only one step. Explain why this preconditioning is of hardly any use.
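
One choice consistent with this statement (a standard one, used here only for illustration) is $P = L^{-1}$ with $A = LL^{T}$ the Cholesky factorization: then $PAP^{T} = I$ has a single distinct eigenvalue, so CG converges in one step. The sketch below verifies this, and also shows why it gains nothing: forming and applying $L^{-1}$ already solves $A\boldsymbol{x} = \boldsymbol{b}$ directly.

```python
# Illustrative sketch (not part of the question): take the Cholesky factorization
# A = L L^T and P = L^{-1}.  Then P A P^T = I has a single distinct eigenvalue, so the
# preconditioned CG iteration converges in one step -- but forming and applying L^{-1}
# already amounts to solving A x = b directly, which is why the trick gains nothing.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(3)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                 # symmetric positive definite test matrix
b = rng.standard_normal(n)

L = cholesky(A, lower=True)                 # A = L L^T
# P A P^T = L^{-1} A L^{-T}, computed with triangular solves instead of explicit inverses.
PAPt = solve_triangular(L, solve_triangular(L, A, lower=True).T, lower=True).T
print(np.allclose(PAPt, np.eye(n)))         # True: identity, so CG needs one iteration

# The same triangular solves give the exact solution without any CG at all:
x = solve_triangular(L.T, solve_triangular(L, b, lower=True), lower=False)
print(np.linalg.norm(A @ x - b))            # ~ machine precision
```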
