• # Paper 1, Section I, B

Prove, from first principles, that there is an algorithm that can determine whether any real symmetric matrix $A \in \mathbb{R}^{n \times n}$ is positive definite or not, with the computational cost (number of arithmetic operations) bounded by $\mathcal{O}\left(n^{3}\right)$.

[Hint: Consider the LDL decomposition.]
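
As an illustration of the hint (not a substitute for the proof), the test can be sketched as follows: run the symmetric elimination that underlies the $LDL^{T}$ factorisation and stop at the first non-positive pivot; the triple loop costs $\mathcal{O}\left(n^{3}\right)$ operations. A minimal sketch (the function name is an illustrative choice):

```python
import numpy as np

def is_positive_definite(A):
    """Symmetric elimination behind the LDL^T factorisation: A is positive
    definite iff all n pivots are positive; cost is O(n^3) operations."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n):
        if A[k, k] <= 0.0:
            return False   # a non-positive pivot rules out positive definiteness
        # eliminate row/column k symmetrically (Schur complement update)
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k + 1:, k]) / A[k, k]
    return True
```

No square roots are needed, which is why the $LDL^{T}$ route is preferred over Cholesky for a pure yes/no test.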

• # Paper 1, Section II, B

For the ordinary differential equation

$\boldsymbol{y}^{\prime}=\boldsymbol{f}(t, \boldsymbol{y}), \quad \boldsymbol{y}(0)=\tilde{\boldsymbol{y}}_{0}, \quad t \geqslant 0 \qquad (*)$

where $\boldsymbol{y}(t) \in \mathbb{R}^{N}$ and the function $\boldsymbol{f}: \mathbb{R} \times \mathbb{R}^{N} \rightarrow \mathbb{R}^{N}$ is analytic, consider an explicit one-step method described as the mapping

$\boldsymbol{y}_{n+1}=\boldsymbol{y}_{n}+h \varphi\left(t_{n}, \boldsymbol{y}_{n}, h\right) \qquad (\dagger)$

Here $\varphi: \mathbb{R}_{+} \times \mathbb{R}^{N} \times \mathbb{R}_{+} \rightarrow \mathbb{R}^{N}, n=0,1, \ldots$ and $t_{n}=n h$ with time step $h>0$, producing numerical approximations $\boldsymbol{y}_{n}$ to the exact solution $\boldsymbol{y}\left(t_{n}\right)$ of equation $(*)$, with $\boldsymbol{y}_{0}$ being the initial value of the numerical solution.

(i) Define the local error of a one-step method.

(ii) Let $\|\cdot\|$ be a norm on $\mathbb{R}^{N}$ and suppose that

$\|\boldsymbol{\varphi}(t, \boldsymbol{u}, h)-\boldsymbol{\varphi}(t, \boldsymbol{v}, h)\| \leqslant L\|\boldsymbol{u}-\boldsymbol{v}\|,$

for all $h>0, t \in \mathbb{R}, \boldsymbol{u}, \boldsymbol{v} \in \mathbb{R}^{N}$, where $L$ is some positive constant. Let $t^{*}>0$ be given and $\boldsymbol{e}_{0}=\boldsymbol{y}_{0}-\boldsymbol{y}(0)$ denote the initial error (potentially non-zero). Show that if the local error of the one-step method $(\dagger)$ is $\mathcal{O}\left(h^{p+1}\right)$, then

$\max _{n=0, \ldots,\left\lfloor t^{*} / h\right\rfloor}\left\|\boldsymbol{y}_{n}-\boldsymbol{y}(n h)\right\| \leqslant e^{t^{*} L}\left\|\boldsymbol{e}_{0}\right\|+\mathcal{O}\left(h^{p}\right), \quad h \rightarrow 0 \qquad (\dagger\dagger)$

(iii) Let $N=1$ and consider equation $(*)$ where $f$ is time-independent satisfying $|f(u)-f(v)| \leqslant K|u-v|$ for all $u, v \in \mathbb{R}$, where $K$ is a positive constant. Consider the one-step method given by

$y_{n+1}=y_{n}+\frac{1}{4} h\left(k_{1}+3 k_{2}\right), \quad k_{1}=f\left(y_{n}\right), \quad k_{2}=f\left(y_{n}+\frac{2}{3} h k_{1}\right) .$

Use part (ii) to show that for this method we have that equation (††) holds (with a potentially different constant $L$ ) for $p=2$.
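
As a sanity check on part (iii) (outside the question itself), the two-stage method can be coded and its second-order convergence observed empirically: halving $h$ should reduce the global error by roughly a factor of $2^{p}=4$. The test problem $y^{\prime}=y$ and the step counts are illustrative choices.

```python
import math

def step(f, y, h):
    """One step of the two-stage method of part (iii)."""
    k1 = f(y)
    k2 = f(y + (2.0 / 3.0) * h * k1)
    return y + 0.25 * h * (k1 + 3.0 * k2)

def solve(f, y0, t_star, n):
    """March from t = 0 to t = t_star in n steps of size h = t_star / n."""
    h = t_star / n
    y = y0
    for _ in range(n):
        y = step(f, y, h)
    return y

# illustrative test problem: y' = y, exact value y(1) = e
f = lambda y: y
```

Halving the step size should divide the error at $t^{*}=1$ by about four, consistent with $p=2$.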

• # Paper 1, Section II, E

Let $A \in \mathbb{R}^{n \times n}$ with $n>2$ and define $\operatorname{Spec}(A)=\{\lambda \in \mathbb{C} \mid A-\lambda I \text { is not invertible}\}$.

The QR algorithm for computing $\operatorname{Spec}(A)$ is defined as follows. Set $A_{0}=A$. For $k=0,1, \ldots$ compute the $\mathrm{QR}$ factorization $A_{k}=Q_{k} R_{k}$ and set $A_{k+1}=R_{k} Q_{k}$. (Here $Q_{k}$ is an $n \times n$ orthogonal matrix and $R_{k}$ is an $n \times n$ upper triangular matrix.)

(a) Show that $A_{k+1}$ is related to the original matrix $A$ by the similarity transformation $A_{k+1}=\bar{Q}_{k}^{T} A \bar{Q}_{k}$, where $\bar{Q}_{k}=Q_{0} Q_{1} \cdots Q_{k}$ is orthogonal and $\bar{Q}_{k} \bar{R}_{k}$ is the QR factorization of $A^{k+1}$ with $\bar{R}_{k}=R_{k} R_{k-1} \cdots R_{0}$.

(b) Suppose that $A$ is symmetric and that its eigenvalues satisfy

$\left|\lambda_{1}\right|<\left|\lambda_{2}\right|<\cdots<\left|\lambda_{n-1}\right|=\left|\lambda_{n}\right|$

Suppose, in addition, that the first two canonical basis vectors are given by $\mathbf{e}_{1}=\sum_{i=1}^{n} b_{i} \mathbf{w}_{i}$, $\mathbf{e}_{2}=\sum_{i=1}^{n} c_{i} \mathbf{w}_{i}$, where $b_{i} \neq 0, c_{i} \neq 0$ for $i=1, \ldots, n$ and $\left\{\mathbf{w}_{i}\right\}_{i=1}^{n}$ are the normalised eigenvectors of $A$.

Let $B_{k} \in \mathbb{R}^{2 \times 2}$ be the $2 \times 2$ upper left corner of $A_{k}$. Show that $d_{\mathrm{H}}\left(\operatorname{Spec}\left(B_{k}\right), S\right) \rightarrow 0$ as $k \rightarrow \infty$, where $S=\left\{\lambda_{n}\right\} \cup\left\{\lambda_{n-1}\right\}$ and $d_{\mathrm{H}}$ denotes the Hausdorff metric

$d_{\mathrm{H}}(X, Y)=\max \left\{\sup _{x \in X} \inf _{y \in Y}|x-y|, \sup _{y \in Y} \inf _{x \in X}|x-y|\right\}, \quad X, Y \subset \mathbb{C}$

[Hint: You may use the fact that for real symmetric matrices $U, V$ we have $d_{\mathrm{H}}(\operatorname{Spec}(U), \operatorname{Spec}(V)) \leqslant\|U-V\|_{2}$.]
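
For orientation, the iteration itself is only a few lines; a sketch using NumPy's QR factorisation (the $2 \times 2$ symmetric test matrix is an illustrative choice) shows the diagonal of $A_{k}$ approaching $\operatorname{Spec}(A)$:

```python
import numpy as np

def qr_algorithm(A, iters=200):
    """Unshifted QR iteration: each A_{k+1} = R_k Q_k = Q_k^T A_k Q_k,
    so every iterate is orthogonally similar to A."""
    Ak = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return Ak
```

For a symmetric matrix with eigenvalues of distinct modulus, the off-diagonal entries decay geometrically and the diagonal converges to the spectrum.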

• # Paper 2, Section II, 17B

(a) Define Householder reflections and show that a real Householder reflection is symmetric and orthogonal. Moreover, show that if $H, A \in \mathbb{R}^{n \times n}$, where $H$ is a Householder reflection and $A$ is a full matrix, then the computational cost (number of arithmetic operations) of computing $H A H^{-1}$ can be $\mathcal{O}\left(n^{2}\right)$ operations, as opposed to $\mathcal{O}\left(n^{3}\right)$ for standard matrix products.

(b) Show that for any $A \in \mathbb{R}^{n \times n}$ there exists an orthogonal matrix $Q \in \mathbb{R}^{n \times n}$ such that

$Q A Q^{T}=T=\left[\begin{array}{ccccc} t_{1,1} & t_{1,2} & t_{1,3} & \cdots & t_{1, n} \\ t_{2,1} & t_{2,2} & t_{2,3} & \cdots & t_{2, n} \\ 0 & t_{3,2} & t_{3,3} & \cdots & t_{3, n} \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & t_{n, n-1} & t_{n, n} \end{array}\right]$

In particular, $T$ has zero entries below the first subdiagonal. Show that one can compute such a $T$ and $Q$ (they may not be unique) using $\mathcal{O}\left(n^{3}\right)$ arithmetic operations.

[Hint: Multiply A from the left and right with Householder reflections.]
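
The cost saving claimed in part (a) comes from never forming $H$ explicitly: since $H^{2}=I$, $H A H^{-1}=H A H$, and with $H=I-2 \boldsymbol{v} \boldsymbol{v}^{T} /\left(\boldsymbol{v}^{T} \boldsymbol{v}\right)$ the product needs only matrix-vector products and rank-one updates. A sketch (function names are illustrative choices):

```python
import numpy as np

def householder(v):
    """H = I - 2 v v^T / (v^T v): symmetric and orthogonal."""
    v = np.asarray(v, dtype=float)
    return np.eye(len(v)) - 2.0 * np.outer(v, v) / (v @ v)

def similarity_fast(A, v):
    """H A H^{-1} = H A H (since H^2 = I) in O(n^2) operations:
    two matrix-vector products and two rank-one updates."""
    v = np.asarray(v, dtype=float)
    beta = 2.0 / (v @ v)
    B = A - np.outer(beta * (A @ v), v)     # A H, via a rank-one update
    return B - np.outer(v, beta * (v @ B))  # H (A H), likewise
```

Forming `H @ A @ H` directly would cost $\mathcal{O}(n^{3})$; the version above touches each entry of $A$ only a constant number of times.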

• # Paper 2, Section II, 41E

(a) Let $\mathbf{x} \in \mathbb{R}^{N}$ and define $\mathbf{y} \in \mathbb{R}^{2 N}$ by

$y_{n}= \begin{cases}x_{n}, & 0 \leqslant n \leqslant N-1 \\ x_{2 N-n-1}, & N \leqslant n \leqslant 2 N-1\end{cases}$

Let $\mathbf{Y} \in \mathbb{C}^{2 N}$ be defined as the discrete Fourier transform (DFT) of $\mathbf{y}$, i.e.

$Y_{k}=\sum_{n=0}^{2 N-1} y_{n} \omega_{2 N}^{n k}, \quad \omega_{2 N}=\exp (-\pi i / N), \quad 0 \leqslant k \leqslant 2 N-1$

Show that

$Y_{k}=2 \omega_{2 N}^{-k / 2} \sum_{n=0}^{N-1} x_{n} \cos \left[\frac{\pi}{N}\left(n+\frac{1}{2}\right) k\right], \quad 0 \leqslant k \leqslant 2 N-1$

(b) Define the discrete cosine transform $(\mathrm{DCT}) \mathcal{C}_{N}: \mathbb{R}^{N} \rightarrow \mathbb{R}^{N}$ by

$\mathbf{z}=\mathcal{C}_{N} \mathbf{x}, \text { where } z_{k}=\sum_{n=0}^{N-1} x_{n} \cos \left[\frac{\pi}{N}\left(n+\frac{1}{2}\right) k\right], \quad k=0, \ldots, N-1$

For $N=2^{p}$ with $p \in \mathbb{N}$, show that, similar to the Fast Fourier Transform (FFT), there exists an algorithm that computes the DCT of a vector of length $N$, where the number of multiplications required is bounded by $C N \log N$, where $C$ is some constant independent of $N$.

[You may not assume that the FFT algorithm requires $\mathcal{O}(N \log N)$ multiplications to compute the DFT of a vector of length $N$. If you use this, you must prove it.]
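
For illustration (this does not replace the proof), the identity of part (a) yields an $\mathcal{O}(N \log N)$ DCT: mirror $\mathbf{x}$ into $\mathbf{y}$, take one length-$2N$ FFT, and rescale. A sketch using NumPy's FFT, whose sign convention matches $\omega_{2 N}=e^{-\pi i / N}$:

```python
import numpy as np

def dct_via_fft(x):
    """DCT of part (b) via one length-2N FFT, using the mirrored vector y
    of part (a): z_k = omega_{2N}^{k/2} Y_k / 2 for 0 <= k < N."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    y = np.concatenate([x, x[::-1]])   # y_n = x_{2N-n-1} for n >= N
    Y = np.fft.fft(y)                  # np.fft.fft uses exp(-2*pi*i*n*k/(2N))
    k = np.arange(N)
    return np.real(np.exp(-1j * np.pi * k / (2 * N)) * Y[:N]) / 2.0
```

The result agrees with the direct $\mathcal{O}(N^{2})$ summation of the defining formula.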

• # Paper 3, Section II, 40E

Consider discretisation of the diffusion equation

$\frac{\partial u}{\partial t}=\frac{\partial^{2} u}{\partial x^{2}}, \quad 0 \leqslant t \leqslant 1 \qquad (*)$

by the Crank-Nicolson method:

$u_{m}^{n+1}-\frac{1}{2} \mu\left(u_{m-1}^{n+1}-2 u_{m}^{n+1}+u_{m+1}^{n+1}\right)=u_{m}^{n}+\frac{1}{2} \mu\left(u_{m-1}^{n}-2 u_{m}^{n}+u_{m+1}^{n}\right), \quad n=0, \ldots, N, \qquad (\dagger)$

where $\mu=\frac{k}{h^{2}}$ is the Courant number, $h$ is the step size in the space discretisation, $k=\frac{1}{N+1}$ is the step size in the time discretisation, and $u_{m}^{n} \approx u(m h, n k)$, where $u(x, t)$ is the solution of $(*)$. The initial condition $u(x, 0)=u_{0}(x)$ is given.

(a) Consider the Cauchy problem for $(*)$ on the whole line, $x \in \mathbb{R}$ (thus $m \in \mathbb{Z}$), and derive the formula for the amplification factor of the Crank-Nicolson method $(\dagger)$. Use the amplification factor to show that the Crank-Nicolson method is stable for the Cauchy problem for all $\mu>0$.

[You may quote basic properties of the Fourier transform mentioned in lectures, but not the theorem on sufficient and necessary conditions on the amplification factor to have stability.]

(b) Consider $(*)$ on the interval $0 \leqslant x \leqslant 1$ (thus $m=1, \ldots, M$ and $h=\frac{1}{M+1}$) with Dirichlet boundary conditions $u(0, t)=\phi_{0}(t)$ and $u(1, t)=\phi_{1}(t)$, for some sufficiently smooth functions $\phi_{0}$ and $\phi_{1}$. Show directly (without using the Lax equivalence theorem) that, given sufficient smoothness of $u$, the Crank-Nicolson method is convergent, for any $\mu>0$, in the norm defined by $\|\boldsymbol{\eta}\|_{2, h}=\left(h \sum_{m=1}^{M}\left|\eta_{m}\right|^{2}\right)^{1 / 2}$ for $\boldsymbol{\eta} \in \mathbb{R}^{M}$.

[You may assume that the Trapezoidal method has local order 3, and that the standard three-point centred discretisation of the second derivative (as used in the Crank-Nicolson method) has local order 2.]
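
As an illustration of the scheme (with zero boundary data, a simplification of part (b)), each Crank-Nicolson step solves a linear system; the dense solve below stands in for the tridiagonal solve one would use in practice, and the grid parameters are illustrative choices.

```python
import numpy as np

def crank_nicolson(u0, mu, steps):
    """Advance u_t = u_xx by Crank-Nicolson with zero Dirichlet boundaries;
    u0 holds the M interior values, mu = k / h^2."""
    M = len(u0)
    T = (np.diag(np.full(M, -2.0)) + np.diag(np.ones(M - 1), 1)
         + np.diag(np.ones(M - 1), -1))   # three-point second difference
    A = np.eye(M) - 0.5 * mu * T          # implicit (n+1) side
    B = np.eye(M) + 0.5 * mu * T          # explicit (n) side
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        u = np.linalg.solve(A, B @ u)     # tridiagonal system in practice
    return u
```

Running the mode $u_{0}(x)=\sin \pi x$ reproduces the exact decay $e^{-\pi^{2} t} \sin \pi x$ to within the expected $\mathcal{O}(h^{2}+k^{2})$ error, even with $\mu=1>\frac{1}{2}$.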

• # Paper 3, Section II, B

The functions $p_{0}, p_{1}, p_{2}, \ldots$ are generated by the formula

$p_{n}(x)=(-1)^{n} x^{-1 / 2} e^{x} \frac{d^{n}}{d x^{n}}\left(x^{n+1 / 2} e^{-x}\right), \quad 0 \leqslant x<\infty$

(a) Show that $p_{n}(x)$ is a monic polynomial of degree $n$. Write down the explicit forms of $p_{0}(x), p_{1}(x), p_{2}(x)$.

(b) Demonstrate the orthogonality of these polynomials with respect to the scalar product

$\langle f, g\rangle=\int_{0}^{\infty} x^{1 / 2} e^{-x} f(x) g(x) d x$

i.e. that $\left\langle p_{n}, p_{m}\right\rangle=0$ for $m \neq n$, and show that

$\left\langle p_{n}, p_{n}\right\rangle=n ! \Gamma\left(n+\frac{3}{2}\right)$

where $\Gamma(y)=\int_{0}^{\infty} x^{y-1} e^{-x} d x$.

(c) Assuming that a three-term recurrence relation in the form

$p_{n+1}(x)=\left(x-\alpha_{n}\right) p_{n}(x)-\beta_{n} p_{n-1}(x), \quad n=1,2, \ldots$

holds, find the explicit expressions for $\alpha_{n}$ and $\beta_{n}$ as functions of $n$.

[Hint: you may use the fact that $\Gamma(y+1)=y \Gamma(y) .]$

• # Paper 4, Section II, 40E

(a) Show that if $A$ and $B$ are real matrices such that both $A$ and $A-B-B^{T}$ are symmetric positive definite, then the spectral radius of $H=-(A-B)^{-1} B$ is strictly less than $1 .$

(b) Consider the Poisson equation $\nabla^{2} u=f$ (with zero Dirichlet boundary condition) on the unit square, where $f$ is some smooth function. Given $m \in \mathbb{N}$ and an equidistant grid on the unit square with stepsize $h=1 /(m+1)$, the standard five-point method is given by

$u_{i-1, j}+u_{i+1, j}+u_{i, j-1}+u_{i, j+1}-4 u_{i, j}=h^{2} f_{i, j}, \quad i, j=1, \ldots, m \qquad (*)$

where $f_{i, j}=f(i h, j h)$ and $u_{0, j}=u_{m+1, j}=u_{i, 0}=u_{i, m+1}=0$. Equation $(*)$ can be written as a linear system $A x=b$, where $A \in \mathbb{R}^{m^{2} \times m^{2}}$ and $b \in \mathbb{R}^{m^{2}}$ both depend on the chosen ordering of the grid points.

Use the result in part (a) to show that the Gauss-Seidel method converges for the linear system $A x=b$ described above, regardless of the choice of ordering of the grid points.

[You may quote convergence results - based on the spectral radius of the iteration matrix - mentioned in the lecture notes.]
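
A sketch of the objects in part (b), for experimentation only (the grid size and sweep count are illustrative choices); the componentwise sweep below is the Gauss-Seidel iteration whose convergence the question asks you to prove:

```python
import numpy as np

def poisson_matrix(m):
    """Five-point Laplacian on the m x m interior grid, ordered by columns
    (this is the matrix A of (*), up to the sign convention chosen)."""
    T = (np.diag(np.full(m, -4.0)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1))
    S = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
    return np.kron(np.eye(m), T) + np.kron(S, np.eye(m))

def gauss_seidel(A, b, iters):
    """Componentwise Gauss-Seidel: each x_i uses the freshest values."""
    x = np.zeros(len(b))
    for _ in range(iters):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

For this matrix the iteration drives the residual to machine precision, in line with the convergence the question establishes.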

• # Paper 4, Section I, B

(a) Given the data $f(0)=0, f(1)=4, f(2)=2, f(3)=8$, find the interpolating cubic polynomial $p_{3} \in \mathbb{P}_{3}[x]$ in the Newton form.

(b) We add to the data one more value, $f(-2)=10$. Find the interpolating quartic polynomial $p_{4} \in \mathbb{P}_{4}[x]$ for the extended data in the Newton form.


• # Paper 1, Section I, C

(a) Find an $L U$ factorisation of the matrix

$A=\left[\begin{array}{cccc} 1 & 1 & 0 & 3 \\ 0 & 2 & 2 & 12 \\ 0 & 5 & 7 & 32 \\ 3 & -1 & -1 & -10 \end{array}\right]$

where the diagonal elements of $L$ are $L_{11}=L_{44}=1, L_{22}=L_{33}=2$.

(b) Use this factorisation to solve the linear system $A \mathbf{x}=\mathbf{b}$, where

$\mathbf{b}=\left[\begin{array}{c} -3 \\ -12 \\ -30 \\ 13 \end{array}\right]$
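
A sketch of the factorisation step. Note the question prescribes $L_{22}=L_{33}=2$, whereas this code uses the standard unit-diagonal (Doolittle) normalisation; the two factorisations differ only by a diagonal rescaling between $L$ and $U$.

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU (unit diagonal in L, no pivoting); valid here because
    this matrix needs no row exchanges."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        L[k + 1:, k] = (A[k + 1:, k] - L[k + 1:, :k] @ U[:k, k]) / U[k, k]
    return L, U
```

Once $L$ and $U$ are known, $A \mathbf{x}=\mathbf{b}$ is solved by one forward and one backward substitution.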

• # Paper 1, Section II, C

(a) Given a set of $n+1$ distinct real points $x_{0}, x_{1}, \ldots, x_{n}$ and real numbers $f_{0}, f_{1}, \ldots, f_{n}$, show that the interpolating polynomial $p_{n} \in \mathbb{P}_{n}[x], p_{n}\left(x_{i}\right)=f_{i}$, can be written in the form

$p_{n}(x)=\sum_{k=0}^{n} a_{k} \prod_{j=0, j \neq k}^{n} \frac{x-x_{j}}{x_{k}-x_{j}}, \quad x \in \mathbb{R}$

where the coefficients $a_{k}$ are to be determined.

(b) Consider the approximation of the integral of a function $f \in C[a, b]$ by a finite sum,

$\int_{a}^{b} f(x) d x \approx \sum_{k=0}^{s-1} w_{k} f\left(c_{k}\right) \qquad (1)$

where the weights $w_{0}, \ldots, w_{s-1}$ and nodes $c_{0}, \ldots, c_{s-1} \in[a, b]$ are independent of $f$. Derive the expressions for the weights $w_{k}$ that make the approximation (1) exact for $f$ being any polynomial of degree $s-1$, i.e. $f \in \mathbb{P}_{s-1}[x]$.

Show that by choosing $c_{0}, \ldots, c_{s-1}$ to be zeros of the polynomial $q_{s}(x)$ of degree $s$, one of a sequence of orthogonal polynomials defined with respect to the scalar product

$\langle u, v\rangle=\int_{a}^{b} u(x) v(x) d x \qquad (2)$

the approximation (1) becomes exact for $f \in \mathbb{P}_{2 s-1}[x]$ (i.e. for all polynomials of degree $2 s-1$).

(c) On the interval $[a, b]=[-1,1]$ the scalar product (2) generates orthogonal polynomials given by

$q_{n}(x)=\frac{1}{2^{n} n !} \frac{d^{n}}{d x^{n}}\left(x^{2}-1\right)^{n}, \quad n=0,1,2, \ldots$

Find the values of the nodes $c_{k}$ for which the approximation (1) is exact for all polynomials of degree 7 (i.e. $f \in \mathbb{P}_{7}[x]$ ) but no higher.
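
As a numerical cross-check (not a derivation), the $s=4$ nodes are the roots of $q_{4}$, namely $\pm \sqrt{(3 \mp 2 \sqrt{6 / 5}) / 7}$, and the resulting rule integrates degree 7 exactly but not degree 8:

```python
import numpy as np

# s = 4 Gauss-Legendre nodes and weights on [-1, 1]:
# exact for all polynomials of degree 2s - 1 = 7, but not degree 8
nodes, weights = np.polynomial.legendre.leggauss(4)

err_deg6 = abs(float(np.sum(weights * nodes ** 6)) - 2.0 / 7.0)  # exact: 2/7
err_deg8 = abs(float(np.sum(weights * nodes ** 8)) - 2.0 / 9.0)  # exact: 2/9
```

`err_deg6` is at rounding level while `err_deg8` is not, confirming exactness up to degree 7 and no higher.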

• # Paper 1, Section II, E

Let $A \in \mathbb{R}^{n \times n}$ be a real symmetric matrix with distinct eigenvalues $\lambda_{1}<\lambda_{2}<\cdots<$ $\lambda_{n}$ and a corresponding orthonormal basis of real eigenvectors $\left\{\boldsymbol{w}_{i}\right\}_{i=1}^{n}$. Given a unit norm vector $\boldsymbol{x}^{(0)} \in \mathbb{R}^{n}$, and a set of parameters $s_{k} \in \mathbb{R}$, consider the inverse iteration algorithm

$\left(A-s_{k} I\right) \boldsymbol{y}=\boldsymbol{x}^{(k)}, \quad \boldsymbol{x}^{(k+1)}=\boldsymbol{y} /\|\boldsymbol{y}\|, \quad k \geqslant 0 .$

(a) Let $s_{k}=s=$ const for all $k$. Assuming that $\boldsymbol{x}^{(0)}=\sum_{i=1}^{n} c_{i} \boldsymbol{w}_{i}$ with all $c_{i} \neq 0$, prove that

$s<\lambda_{1} \Rightarrow \boldsymbol{x}^{(k)} \rightarrow \boldsymbol{w}_{1} \quad \text { or } \quad \boldsymbol{x}^{(k)} \rightarrow-\boldsymbol{w}_{1} \quad(k \rightarrow \infty) .$

Explain briefly what happens to $\boldsymbol{x}^{(k)}$ when $\lambda_{m}<s<\lambda_{m+1}$ for some $m \in\{1,2, \ldots, n-1\}$, and when $\lambda_{n}<s$.

(b) Let $s_{k}=\left(A \boldsymbol{x}^{(k)}, \boldsymbol{x}^{(k)}\right)$ for $k \geqslant 0$. Assume that, for some $k$, some $a_{i} \in \mathbb{R}$ and some small $\epsilon$,

$\boldsymbol{x}^{(k)}=c^{-1}\left(\boldsymbol{w}_{1}+\epsilon \sum_{i \geqslant 2} a_{i} \boldsymbol{w}_{i}\right)$

where $c$ is the appropriate normalising constant. Show that $s_{k}=\lambda_{1}-K \epsilon^{2}+\mathcal{O}\left(\epsilon^{4}\right)$ and determine the value of $K$. Hence show that

$\boldsymbol{x}^{(k+1)}=c_{1}^{-1}\left(\boldsymbol{w}_{1}+\epsilon^{3} \sum_{i \geqslant 2} b_{i} \boldsymbol{w}_{i}+\mathcal{O}\left(\epsilon^{5}\right)\right)$

where $c_{1}$ is the appropriate normalising constant, and find expressions for $b_{i}$.
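
The algorithm of the question translates directly into code; a sketch with a fixed shift as in part (a) (the starting vector is an illustrative choice with $c_{1} \neq 0$ for this test matrix):

```python
import numpy as np

def inverse_iteration(A, s, iters=50):
    """Inverse iteration with fixed shift s; for s < lambda_1 the iterates
    approach +-w_1 (the start must not be orthogonal to w_1)."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        y = np.linalg.solve(A - s * np.eye(n), x)
        x = y / np.linalg.norm(y)
    return x
```

With $s=0$ below the smallest eigenvalue, the Rayleigh quotient of the limit recovers $\lambda_{1}$.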

• # Paper 2, Section II, 40E

(a) For $A \in \mathbb{R}^{n \times n}$ and nonzero $\boldsymbol{v} \in \mathbb{R}^{n}$, define the $m$-th Krylov subspace $K_{m}(A, \boldsymbol{v})$ of $\mathbb{R}^{n}$. Prove that if $A$ has $n$ linearly independent eigenvectors with at most $s$ distinct eigenvalues, then

$\operatorname{dim} K_{m}(A, \boldsymbol{v}) \leqslant s \quad \forall m$

(b) Define the term residual in the conjugate gradient (CG) method for solving a system $A \boldsymbol{x}=\boldsymbol{b}$ with a symmetric positive definite $A$. State two properties of the method regarding residuals and their connection to certain Krylov subspaces, and hence show that, for any right-hand side $\boldsymbol{b}$, the method finds the exact solution after at most $s$ iterations, where $s$ is the number of distinct eigenvalues of $A$.

(c) The preconditioned CG-method $P A P^{T} \widehat{\boldsymbol{x}}=P \boldsymbol{b}$ is applied for solving $A \boldsymbol{x}=\boldsymbol{b}$, with

Prove that the method finds the exact solution after two iterations at most.

(d) Prove that, for any symmetric positive definite $A$, we can find a preconditioner $P$ such that the preconditioned CG-method for solving $A \boldsymbol{x}=\boldsymbol{b}$ would require only one step. Explain why this preconditioning is of hardly any use.
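
For orientation, a textbook CG implementation exhibits the finite-termination property of part (b): with two distinct eigenvalues it stops after at most two iterations (up to rounding). The test matrix is an illustrative choice.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-12):
    """Textbook CG for symmetric positive definite A; the residuals are
    mutually orthogonal, so (in exact arithmetic) termination takes at most
    as many steps as A has distinct eigenvalues."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = float(r @ r)
    its = 0
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / float(p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        its += 1
        rs_new = float(r @ r)
        if rs_new ** 0.5 < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, its
```

A diagonal matrix with eigenvalues $\{2,5\}$ (two distinct values) is solved in two iterations.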

• # Paper 2, Section II, C

Consider a multistep method for numerical solution of the differential equation $\mathbf{y}^{\prime}=\mathbf{f}(t, \mathbf{y})$ :

$\mathbf{y}_{n+2}-\mathbf{y}_{n+1}=h\left[(1+\alpha) \mathbf{f}\left(t_{n+2}, \mathbf{y}_{n+2}\right)+\beta \mathbf{f}\left(t_{n+1}, \mathbf{y}_{n+1}\right)-(\alpha+\beta) \mathbf{f}\left(t_{n}, \mathbf{y}_{n}\right)\right], \qquad (*)$

where $n=0,1, \ldots$, and $\alpha$ and $\beta$ are constants.

(a) Define the order of a method for numerically solving an ODE.

(b) Show that in general an explicit method of the form $(*)$ has order 1. Determine the values of $\alpha$ and $\beta$ for which this multistep method is of order 3.

(c) Show that the multistep method (*) is convergent.
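
Part (b) can be cross-checked numerically: with the candidate order-3 values $\alpha=-\frac{7}{12}$, $\beta=\frac{2}{3}$ (the values the order conditions yield), the quantity $\rho\left(e^{z}\right)-z \sigma\left(e^{z}\right)$ should scale like $z^{4}$, i.e. $\mathcal{O}\left(z^{p+1}\right)$ with $p=3$ and no better. A sketch:

```python
import math

# candidate third-order coefficients from matching the expansion of
# rho(e^z) - z*sigma(e^z) up to z^3 (a check, not the derivation)
alpha, beta = -7.0 / 12.0, 2.0 / 3.0

def defect(z):
    """rho(e^z) - z*sigma(e^z) for the two-step method, with
    rho(w) = w^2 - w and sigma(w) = (1+alpha)w^2 + beta*w - (alpha+beta)."""
    w = math.exp(z)
    rho = w * w - w
    sigma = (1.0 + alpha) * w * w + beta * w - (alpha + beta)
    return rho - z * sigma
```

Halving $z$ should divide the defect by about $2^{4}=16$, and its leading coefficient comes out as $-\frac{1}{24}$, confirming order exactly 3.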

• # Paper 3, Section II, 40E

(a) Give the definition of a normal matrix. Prove that if $A$ is normal, then the (Euclidean) matrix $\ell_{2}$-norm of $A$ is equal to its spectral radius, i.e., $\|A\|_{2}=\rho(A)$.

(b) The advection equation

$u_{t}=u_{x}, \quad 0 \leqslant x \leqslant 1, \quad 0 \leqslant t<\infty$

is discretized by the Crank-Nicolson scheme

$u_{m}^{n+1}-u_{m}^{n}=\frac{1}{4} \mu\left(u_{m+1}^{n+1}-u_{m-1}^{n+1}\right)+\frac{1}{4} \mu\left(u_{m+1}^{n}-u_{m-1}^{n}\right), \quad m=1,2, \ldots, M, \quad n \in \mathbb{Z}_{+}$

Here, $\mu=\frac{k}{h}$ is the Courant number, with $k=\Delta t, h=\Delta x=\frac{1}{M+1}$, and $u_{m}^{n}$ is an approximation to $u(m h, n k)$.

Using the eigenvalue analysis and carefully justifying each step, determine conditions on $\mu>0$ for which the method is stable. [Hint: All $M \times M$ Toeplitz anti-symmetric tridiagonal (TAT) matrices have the same set of orthogonal eigenvectors, and a TAT matrix with the elements $a_{j, j}=a$ and $a_{j, j+1}=-a_{j, j-1}=b$ has the eigenvalues $\lambda_{k}=a+2 \mathrm{i} b \cos \frac{\pi k}{M+1}$, where $\mathrm{i}=\sqrt{-1}$.]

(c) Consider the same advection equation for the Cauchy problem ($x \in \mathbb{R}$, $0 \leqslant t \leqslant T$). Now it is discretized by the two-step leapfrog scheme

$u_{m}^{n+1}=\mu\left(u_{m+1}^{n}-u_{m-1}^{n}\right)+u_{m}^{n-1} .$

Applying the Fourier technique, find the range of $\mu>0$ for which the method is stable.

• # Paper 4, Section II, 40E

(a) For a function $f=f(x, y)$ which is real analytic in $\mathbb{R}^{2}$ and 2-periodic in each variable, its Fourier expansion is given by the formula

$f(x, y)=\sum_{m, n \in \mathbb{Z}} \widehat{f}_{m, n} e^{i \pi m x+i \pi n y}, \quad \widehat{f}_{m, n}=\frac{1}{4} \int_{-1}^{1} \int_{-1}^{1} f(t, \theta) e^{-i \pi m t-i \pi n \theta} d t d \theta$

Derive expressions for the Fourier coefficients of partial derivatives $f_{x}, f_{y}$ and those of the product $h(x, y)=f(x, y) g(x, y)$ in terms of $\widehat{f}_{m, n}$ and $\widehat{g}_{m, n}$.

(b) Let $u(x, y)$ be the 2-periodic solution in $\mathbb{R}^{2}$ of the general second-order elliptic PDE

$\left(a u_{x}\right)_{x}+\left(a u_{y}\right)_{y}=f$

where $a$ and $f$ are both real analytic and 2-periodic, and $a(x, y)>0$. We impose the normalisation condition $\int_{-1}^{1} \int_{-1}^{1} u d x d y=0$ and note from the PDE $\int_{-1}^{1} \int_{-1}^{1} f d x d y=0$.

Construct explicitly the infinite-dimensional linear algebraic system that arises from the application of the Fourier spectral method to the above equation, and explain how to truncate this system to a finite-dimensional one.

(c) Specify the truncated system for the unknowns $\left\{\widehat{u}_{m, n}\right\}$ for the case

$a(x, y)=5+2 \cos \pi x+2 \cos \pi y$

and prove that, for any ordering of the Fourier coefficients $\left\{\widehat{u}_{m, n}\right\}$ into a one-dimensional array, the resulting system is symmetric and positive definite. [You may use the Gershgorin theorem without proof.]


• # Paper 1, Section I, C

Let $[a, b]$ be the smallest interval that contains the $n+1$ distinct real numbers $x_{0}, x_{1}, \ldots, x_{n}$, and let $f$ be a continuous function on that interval.

Define the divided difference $f\left[x_{0}, x_{1}, \ldots, x_{m}\right]$ of degree $m \leqslant n$.

Prove that the polynomial of degree $n$ that interpolates the function $f$ at the points $x_{0}, x_{1}, \ldots, x_{n}$ is equal to the Newton polynomial

$p_{n}(x)=f\left[x_{0}\right]+f\left[x_{0}, x_{1}\right]\left(x-x_{0}\right)+\cdots+f\left[x_{0}, x_{1}, \ldots, x_{n}\right] \prod_{i=0}^{n-1}\left(x-x_{i}\right)$

Prove the recursive formula

$f\left[x_{0}, x_{1}, \ldots, x_{m}\right]=\frac{f\left[x_{1}, x_{2}, \ldots, x_{m}\right]-f\left[x_{0}, x_{1}, \ldots, x_{m-1}\right]}{x_{m}-x_{0}}$

for $1 \leqslant m \leqslant n .$
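
The recursive formula translates directly into the usual in-place divided-difference table; a sketch, together with Horner-style evaluation of the Newton form (the data values in the check are illustrative):

```python
def divided_differences(xs, fs):
    """Return f[x0], f[x0,x1], ..., f[x0,...,xn] by applying the recursive
    formula column by column, updating the table in place."""
    coeffs = list(fs)
    n = len(xs)
    for m in range(1, n):
        for i in range(n - 1, m - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - m])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton polynomial by Horner-style nesting."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result
```

The returned coefficients are exactly the leading entries of each column of the divided-difference table, so `newton_eval` reproduces the data at the nodes.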

• # Paper 1, Section II, C

(a) Describe the Jacobi method for solving a system of linear equations $A \boldsymbol{x}=\boldsymbol{b}$ as a particular case of splitting, and state the criterion for its convergence in terms of the iteration matrix.

(b) For the case when

$A=\left[\begin{array}{lll} 1 & \alpha & \alpha \\ \alpha & 1 & \alpha \\ \alpha & \alpha & 1 \end{array}\right]$

find the exact range of the parameter $\alpha$ for which the Jacobi method converges.

(c) State the Householder-John theorem and deduce that the Jacobi method converges if $A$ is a symmetric positive-definite tridiagonal matrix.
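
Part (b) can be explored numerically before deriving the exact range; this helper computes $\rho(H)$ for the Jacobi iteration matrix of the given $3 \times 3$ family (the sample values of $\alpha$ are illustrative):

```python
import numpy as np

def jacobi_spectral_radius(alpha):
    """Spectral radius of the Jacobi iteration matrix H = -D^{-1}(A - D)
    for the 3x3 matrix of part (b); Jacobi converges iff rho(H) < 1."""
    A = alpha * np.ones((3, 3)) + (1.0 - alpha) * np.eye(3)
    D = np.diag(np.diag(A))
    H = -np.linalg.solve(D, A - D)
    return float(max(abs(np.linalg.eigvals(H))))
```

Since $D=I$ here, the eigenvalues of $H$ are those of $-\alpha(\mathbf{1}\mathbf{1}^{T}-I)$, which the analytic answer must match.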

• # Paper 1, Section II, C

(a) An $s$-step method for solving the ordinary differential equation

$\frac{d \mathbf{y}}{d t}=\mathbf{f}(t, \mathbf{y})$

is given by

$\sum_{l=0}^{s} \rho_{l} \mathbf{y}_{n+l}=h \sum_{l=0}^{s} \sigma_{l} \mathbf{f}\left(t_{n+l}, \mathbf{y}_{n+l}\right), \quad n=0,1, \ldots$

where $\rho_{l}$ and $\sigma_{l}(l=0,1, \ldots, s)$ are constant coefficients, with $\rho_{s}=1$, and $h$ is the time-step. Prove that the method is of order $p \geqslant 1$ if and only if

$\rho\left(e^{z}\right)-z \sigma\left(e^{z}\right)=O\left(z^{p+1}\right)$

as $z \rightarrow 0$, where

$\rho(w)=\sum_{l=0}^{s} \rho_{l} w^{l}, \quad \sigma(w)=\sum_{l=0}^{s} \sigma_{l} w^{l}$

(b) Show that the Adams-Moulton method

$\mathbf{y}_{n+2}=\mathbf{y}_{n+1}+\frac{h}{12}\left(5 \mathbf{f}\left(t_{n+2}, \mathbf{y}_{n+2}\right)+8 \mathbf{f}\left(t_{n+1}, \mathbf{y}_{n+1}\right)-\mathbf{f}\left(t_{n}, \mathbf{y}_{n}\right)\right)$

is of third order and convergent.

[You may assume the Dahlquist equivalence theorem if you state it clearly.]

• # Paper 2, Section II, C

The Poisson equation on the unit square, equipped with zero boundary conditions, is discretized with the 9-point scheme:

\begin{aligned} &-\frac{10}{3} u_{i, j}+\frac{2}{3}\left(u_{i+1, j}+u_{i-1, j}+u_{i, j+1}+u_{i, j-1}\right) \\ &+\frac{1}{6}\left(u_{i+1, j+1}+u_{i+1, j-1}+u_{i-1, j+1}+u_{i-1, j-1}\right)=h^{2} f_{i, j} \end{aligned}

where $1 \leqslant i, j \leqslant m, u_{i, j} \approx u(i h, j h)$, and $(i h, j h)$ are the grid points with $h=\frac{1}{m+1}$. We also assume that $u_{0, j}=u_{i, 0}=u_{m+1, j}=u_{i, m+1}=0$.

(a) Prove that all $m \times m$ tridiagonal symmetric Toeplitz (TST-) matrices

$H=[\beta, \alpha, \beta]:=\left[\begin{array}{cccc} \alpha & \beta & & \\ \beta & \alpha & \ddots & \\ & \ddots & \ddots & \beta \\ & & \beta & \alpha \end{array}\right] \in \mathbb{R}^{m \times m}$

share the same eigenvectors $\boldsymbol{q}_{k}$ with the components $(\sin j k \pi h)_{j=1}^{m}$ for $k=1, \ldots, m$. Find expressions for the corresponding eigenvalues $\lambda_{k}$ for $k=1, \ldots, m$. Deduce that $H=Q D Q^{-1}$, where $D=\operatorname{diag}\left\{\lambda_{k}\right\}$ and $Q$ is the matrix $(\sin i j \pi h)_{i, j=1}^{m} .$

(b) Show that, by arranging the grid points $(ih, jh)$ into a one-dimensional array by columns, the 9-point scheme results in a system of linear equations of the form

$A \boldsymbol{u}=\boldsymbol{b} \Leftrightarrow\left[\begin{array}{cccc} B & C & & \\ C & B & \ddots & \\ & \ddots & \ddots & C \\ & & C & B \end{array}\right]\left[\begin{array}{c} \boldsymbol{u}_{1} \\ \boldsymbol{u}_{2} \\ \vdots \\ \boldsymbol{u}_{m} \end{array}\right]=\left[\begin{array}{c} \boldsymbol{b}_{1} \\ \boldsymbol{b}_{2} \\ \vdots \\ \boldsymbol{b}_{m} \end{array}\right] \qquad (2)$

where $A \in \mathbb{R}^{m^{2} \times m^{2}}$, the vectors $\boldsymbol{u}_{k}, \boldsymbol{b}_{k} \in \mathbb{R}^{m}$ are portions of $\boldsymbol{u}, \boldsymbol{b} \in \mathbb{R}^{m^{2}}$, respectively, and $B, C$ are $m \times m$ TST-matrices whose elements you should determine.

(c) Using $\boldsymbol{v}_{k}=Q^{-1} \boldsymbol{u}_{k}, \boldsymbol{c}_{k}=Q^{-1} \boldsymbol{b}_{k}$, show that (2) is equivalent to

$\left[\begin{array}{cccc} D & E & & \\ E & D & \ddots & \\ & \ddots & \ddots & E \\ & & E & D \end{array}\right]\left[\begin{array}{c} \boldsymbol{v}_{1} \\ \boldsymbol{v}_{2} \\ \vdots \\ \boldsymbol{v}_{m} \end{array}\right]=\left[\begin{array}{c} \boldsymbol{c}_{1} \\ \boldsymbol{c}_{2} \\ \vdots \\ \boldsymbol{c}_{m} \end{array}\right] \qquad (3)$

where $D$ and $E$ are diagonal matrices.

(d) Show that, by appropriate reordering of the grid, the system (3) is reduced to $m$ uncoupled $m \times m$ systems of the form

$\Lambda_{k} \widehat{v}_{k}=\widehat{c}_{k}, \quad k=1, \ldots, m$

Determine the elements of the matrices $\Lambda_{k}$.

• # Paper 2, Section II, C

Define the linear least squares problem for the equation

$A \mathbf{x}=\mathbf{b}$

where $A$ is a given $m \times n$ matrix with $m>n, \mathbf{b} \in \mathbb{R}^{m}$ is a given vector and $\mathbf{x} \in \mathbb{R}^{n}$ is an unknown vector.

Explain how the linear least squares problem can be solved by obtaining a $Q R$ factorization of the matrix $A$, where $Q$ is an orthogonal $m \times m$ matrix and $R$ is an upper-triangular $m \times n$ matrix in standard form.

Use the Gram-Schmidt method to obtain a $Q R$ factorization of the matrix

$A=\left(\begin{array}{lll} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 0 \end{array}\right)$

and use it to solve the linear least squares problem in the case

$\mathbf{b}=\left(\begin{array}{l} 1 \\ 2 \\ 3 \\ 6 \end{array}\right)$
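
A sketch of the computation (classical Gram-Schmidt producing the skinny $m \times n$ factorisation; the question works with the full $m \times m$ $Q$, which differs only by extra orthonormal columns that play no role in the solution):

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt: Q has orthonormal columns, R is upper
    triangular, and A = QR."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float)
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]
            v = v - R[i, j] * Q[:, i]   # remove components along earlier q_i
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

def least_squares(A, b):
    """Solve min ||Ax - b||_2 via the QR factorisation: R x = Q^T b."""
    Q, R = gram_schmidt_qr(A)
    return np.linalg.solve(R, Q.T @ b)
```

For the matrix and right-hand side of the question this reproduces the least-squares solution.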

• # Paper 3, Section II, 40C

The diffusion equation

$u_{t}=u_{x x}, \quad 0 \leqslant x \leqslant 1, \quad t \geqslant 0,$

with the initial condition $u(x, 0)=\phi(x), 0 \leqslant x \leqslant 1$, and boundary conditions $u(0, t)=u(1, t)=0$, is discretised by $u_{m}^{n} \approx u(m h, n k)$ with $k=\Delta t, h=\Delta x=1 /(1+M)$. The Courant number is given by $\mu=k / h^{2}$.

(a) The system is solved numerically by the method

$u_{m}^{n+1}=u_{m}^{n}+\mu\left(u_{m-1}^{n}-2 u_{m}^{n}+u_{m+1}^{n}\right), \quad m=1,2, \ldots, M, \quad n \geqslant 0 .$

Prove directly that $\mu \leqslant 1 / 2$ implies convergence.

(b) Now consider the method

$a u_{m}^{n+1}-\frac{1}{4}(\mu-c)\left(u_{m-1}^{n+1}-2 u_{m}^{n+1}+u_{m+1}^{n+1}\right)=a u_{m}^{n}+\frac{1}{4}(\mu+c)\left(u_{m-1}^{n}-2 u_{m}^{n}+u_{m+1}^{n}\right)$

where $a$ and $c$ are real constants. Using an eigenvalue analysis and carefully justifying each step, determine conditions on $\mu, a$ and $c$ for this method to be stable.

[You may use the notation $[\beta, \alpha, \beta]$ for the tridiagonal matrix with $\alpha$ along the diagonal and $\beta$ along the sub- and super-diagonals, and use without proof any relevant theorems about such matrices.]

• # Paper 3, Section II, C

(a) Let $w(x)$ be a positive weight function on the interval $[a, b]$. Show that

$\langle f, g\rangle=\int_{a}^{b} f(x) g(x) w(x) d x$

defines an inner product on $C[a, b]$.

(b) Consider the sequence of polynomials $p_{n}(x)$ defined by the three-term recurrence relation

$p_{n+1}(x)=\left(x-\alpha_{n}\right) p_{n}(x)-\beta_{n} p_{n-1}(x), \quad n=1,2, \ldots \qquad (*)$

where

$p_{0}(x)=1, \quad p_{1}(x)=x-\alpha_{0},$

and the coefficients $\alpha_{n}$ (for $n \geqslant 0$) and $\beta_{n}$ (for $n \geqslant 1$) are given by

$\alpha_{n}=\frac{\left\langle p_{n}, x p_{n}\right\rangle}{\left\langle p_{n}, p_{n}\right\rangle}, \quad \beta_{n}=\frac{\left\langle p_{n}, p_{n}\right\rangle}{\left\langle p_{n-1}, p_{n-1}\right\rangle}$

Prove that this defines a sequence of monic orthogonal polynomials on $[a, b]$.

(c) The Hermite polynomials $\operatorname{He}_{n}(x)$ are orthogonal on the interval $(-\infty, \infty)$ with weight function $e^{-x^{2} / 2}$. Given that

$\operatorname{He}_{n}(x)=(-1)^{n} e^{x^{2} / 2} \frac{d^{n}}{d x^{n}}\left(e^{-x^{2} / 2}\right)$

deduce that the Hermite polynomials satisfy a relation of the form $(*)$ with $\alpha_{n}=0$ and $\beta_{n}=n$. Show that $\left\langle\operatorname{He}_{n}, \operatorname{He}_{n}\right\rangle=n ! \sqrt{2 \pi}$.

(d) State, without proof, how the properties of the Hermite polynomial $\operatorname{He}_{N}(x)$, for some positive integer $N$, can be used to estimate the integral

$\int_{-\infty}^{\infty} f(x) e^{-x^{2} / 2} d x$

where $f(x)$ is a given function, by the method of Gaussian quadrature. For which polynomials is the quadrature formula exact?
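
Numerically, the rule of part (d) is available off the shelf: NumPy's `hermegauss` returns the roots of the probabilists' Hermite polynomial $\operatorname{He}_{N}$ and the matching weights for the weight function $e^{-x^{2} / 2}$, and the resulting Gauss rule is exact for all $f \in \mathbb{P}_{2 N-1}[x]$. ($N=5$ and the moment checked below are illustrative choices.)

```python
import math
import numpy as np

# Gauss quadrature for the weight e^{-x^2/2}: nodes are the roots of He_N,
# weights are chosen so the rule is exact for degree <= 2N - 1
nodes, weights = np.polynomial.hermite_e.hermegauss(5)

# check against a known moment: int x^4 e^{-x^2/2} dx = 3 sqrt(2 pi)
approx = float(np.sum(weights * nodes ** 4))
```

The weights also sum to $\sqrt{2 \pi}$, the total mass of the weight function.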

• # Paper 4, Section I, C

Calculate the $L U$ factorization of the matrix

$A=\left(\begin{array}{rrrr} 3 & 2 & -3 & -3 \\ 6 & 3 & -7 & -8 \\ 3 & 1 & -6 & -4 \\ -6 & -3 & 9 & 6 \end{array}\right)$

Use this to evaluate $\operatorname{det}(A)$ and to solve the equation

$A \mathbf{x}=\mathbf{b}$

with

$\mathbf{b}=\left(\begin{array}{r} 3 \\ 3 \\ -1 \\ -3 \end{array}\right)$

• # Paper 4, Section II, C

For a 2-periodic analytic function $f$, its Fourier expansion is given by the formula

$f(x)=\sum_{n=-\infty}^{\infty} \widehat{f}_{n} e^{i \pi n x}, \quad \widehat{f}_{n}=\frac{1}{2} \int_{-1}^{1} f(t) e^{-i \pi n t} d t$

(a) Consider the two-point boundary value problem

$-\frac{1}{\pi^{2}}(1+2 \cos \pi x) u^{\prime \prime}+u=1+\sum_{n=1}^{\infty} \frac{2}{n^{2}+1} \cos \pi n x, \quad-1 \leqslant x \leqslant 1,$

with periodic boundary conditions $u(-1)=u(1)$. Construct explicitly the infinite-dimensional linear algebraic system that arises from the application of the Fourier spectral method to the above equation, and explain how to truncate the system to a finite-dimensional one.

(b) A rectangle rule is applied to computing the integral of a 2-periodic analytic function $h$ :

$\int_{-1}^{1} h(t) d t \approx \frac{2}{N} \sum_{k=-N / 2+1}^{N / 2} h\left(\frac{2 k}{N}\right) \qquad (*)$

Find an expression for the error $e_{N}(h):=\mathrm{LHS}-\mathrm{RHS}$ of $(*)$, in terms of $\widehat{h}_{n}$, and show that $e_{N}(h)$ has a spectral rate of decay as $N \rightarrow \infty$. [In the last part, you may quote a relevant theorem about $\widehat{h}_{n}$.]
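
The spectral decay is easy to observe numerically; the 2-periodic analytic integrand below is an illustrative choice with a known integral.

```python
import math

def rectangle_rule(h, N):
    """The N-point periodic rectangle rule of (*) on [-1, 1]."""
    return (2.0 / N) * sum(h(2.0 * k / N) for k in range(-N // 2 + 1, N // 2 + 1))

# a 2-periodic analytic integrand with exact integral 2 / sqrt(3)
g = lambda t: 1.0 / (2.0 + math.cos(math.pi * t))
exact = 2.0 / math.sqrt(3.0)
```

Because the Fourier coefficients of $g$ decay geometrically, the quadrature error falls off faster than any power of $1/N$, reaching machine precision already for modest $N$.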


• # Paper 1, Section I, D

The Trapezoidal Rule for solving the differential equation

$y^{\prime}(t)=f(t, y), \quad t \in[0, T], \quad y(0)=y_{0}$

is defined by

$y_{n+1}=y_{n}+\frac{1}{2} h\left[f\left(t_{n}, y_{n}\right)+f\left(t_{n+1}, y_{n+1}\right)\right]$

where $h=t_{n+1}-t_{n}$.

Determine the minimum order of convergence $k$ of this rule for general functions $f$ that are sufficiently differentiable. Show with an explicit example that there is a function $f$ for which the local truncation error is $A h^{k+1}$ for some constant $A$.

• # Paper 1, Section II, D

Show that if $\mathbf{u} \in \mathbb{R}^{m} \backslash\{\mathbf{0}\}$ then the $m \times m$