# Part IB, 2020

Paper 1, Section II, E

State what it means for a function $f: \mathbb{R}^{m} \rightarrow \mathbb{R}^{r}$ to be differentiable at a point $x \in \mathbb{R}^{m}$, and define its derivative $f^{\prime}(x)$.

Let $\mathcal{M}_{n}$ be the vector space of $n \times n$ real-valued matrices, and let $p: \mathcal{M}_{n} \rightarrow \mathcal{M}_{n}$ be given by $p(A)=A^{3}-3 A-I$. Show that $p$ is differentiable at any $A \in \mathcal{M}_{n}$, and calculate its derivative.

State the inverse function theorem for a function $f$. In the case when $f(0)=0$ and $f^{\prime}(0)=I$, prove the existence of a continuous local inverse function in a neighbourhood of 0. [The rest of the proof of the inverse function theorem is not expected.]

Show that there exists a positive $\epsilon$ such that there is a continuously differentiable function $q: D_{\epsilon}(I) \rightarrow \mathcal{M}_{n}$ such that $p \circ q=\left.\mathrm{id}\right|_{D_{\epsilon}(I)}$. Is it possible to find a continuously differentiable inverse to $p$ on the whole of $\mathcal{M}_{n}$ ? Justify your answer.

Paper 2, Section I, 2E

Let $\tau$ be the collection of subsets of $\mathbb{C}$ of the form $\mathbb{C} \backslash f^{-1}(0)$, where $f$ is an arbitrary complex polynomial. Show that $\tau$ is a topology on $\mathbb{C}$.

Given topological spaces $X$ and $Y$, define the product topology on $X \times Y$. Equip $\mathbb{C}^{2}$ with the topology given by the product of $(\mathbb{C}, \tau)$ with itself. Let $g$ be an arbitrary two-variable complex polynomial. Is the subset $\mathbb{C}^{2} \backslash g^{-1}(0)$ always open in this topology? Justify your answer.

Paper 2, Section II, E

Let $C[0,1]$ be the space of continuous real-valued functions on $[0,1]$, and let $d_{1}, d_{\infty}$ be the metrics on it given by

$d_{1}(f, g)=\int_{0}^{1}|f(x)-g(x)| d x \quad \text { and } \quad d_{\infty}(f, g)=\max _{x \in[0,1]}|f(x)-g(x)|$

Show that id : $\left(C[0,1], d_{\infty}\right) \rightarrow\left(C[0,1], d_{1}\right)$ is a continuous map. Do $d_{1}$ and $d_{\infty}$ induce the same topology on $C[0,1]$ ? Justify your answer.

For any $m \in \mathbb{N}$, let $d$ denote the uniform metric on $\mathbb{R}^{m}$: $d\left(\left(x_{i}\right),\left(y_{i}\right)\right)=\max _{i}\left|x_{i}-y_{i}\right|$. Let $\mathcal{P}_{n} \subset C[0,1]$ be the subspace of real polynomials of degree at most $n$. Define a Lipschitz map between two metric spaces, and show that evaluation at a point gives a Lipschitz map $\left(C[0,1], d_{\infty}\right) \rightarrow(\mathbb{R}, d)$. Hence or otherwise find a bijection from $\left(\mathcal{P}_{n}, d_{\infty}\right)$ to $\left(\mathbb{R}^{n+1}, d\right)$ which is Lipschitz and has a Lipschitz inverse.

Let $\tilde{\mathcal{P}}_{n} \subset \mathcal{P}_{n}$ be the subset of polynomials with values in the range $[-1,1]$.

(i) Show that $\left(\tilde{\mathcal{P}}_{n}, d_{\infty}\right)$ is compact.

(ii) Show that $d_{1}$ and $d_{\infty}$ induce the same topology on $\tilde{\mathcal{P}}_{n}$.

Any theorems that you use should be clearly stated.

[You may use the fact that for distinct constants $a_{i}$, the following matrix is invertible:

$\left(\begin{array}{ccccc} 1 & a_{0} & a_{0}^{2} & \ldots & a_{0}^{n} \\ 1 & a_{1} & a_{1}^{2} & \ldots & a_{1}^{n} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & a_{n} & a_{n}^{2} & \ldots & a_{n}^{n} \end{array}\right)$]
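The bracketed fact is the invertibility of the Vandermonde matrix, whose determinant is $\prod_{i<j}\left(a_{j}-a_{i}\right)$. As an illustrative sanity check (not part of any Tripos answer), the identity can be verified in exact rational arithmetic for a sample choice of distinct constants:

```python
from fractions import Fraction

def vandermonde(a):
    """Rows [1, a_i, a_i^2, ..., a_i^n] for each constant a_i."""
    n = len(a) - 1
    return [[Fraction(ai) ** k for k in range(n + 1)] for ai in a]

def det(m):
    """Exact determinant by Gaussian elimination over the rationals."""
    m = [row[:] for row in m]
    n = len(m)
    d = Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            d = -d
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n):
                m[r][k] -= f * m[c][k]
    return d

a = [0, 1, -2, 3, 5]          # any distinct constants (sample values)
dV = det(vandermonde(a))

# Vandermonde determinant formula: product over i < j of (a_j - a_i)
prod = Fraction(1)
for i in range(len(a)):
    for j in range(i + 1, len(a)):
        prod *= a[j] - a[i]
```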

Paper 1, Section I, G

Let $D$ be the open disc with centre $e^{2 \pi i / 6}$ and radius 1, and let $L$ be the open lower half plane. Starting with a suitable Möbius map, find a conformal equivalence (or conformal bijection) of $D \cap L$ onto the open unit disc.

Paper 1, Section II, G

Let $\ell(z)$ be an analytic branch of $\log z$ on a domain $D \subset \mathbb{C} \backslash\{0\}$. Write down an analytic branch of $z^{1 / 2}$ on $D$. Show that if $\psi_{1}(z)$ and $\psi_{2}(z)$ are two analytic branches of $z^{1 / 2}$ on $D$, then either $\psi_{1}(z)=\psi_{2}(z)$ for all $z \in D$ or $\psi_{1}(z)=-\psi_{2}(z)$ for all $z \in D$.

Describe the principal value or branch $\sigma_{1}(z)$ of $z^{1 / 2}$ on $D_{1}=\mathbb{C} \backslash\{x \in \mathbb{R}: x \leqslant 0\}$. Describe a branch $\sigma_{2}(z)$ of $z^{1 / 2}$ on $D_{2}=\mathbb{C} \backslash\{x \in \mathbb{R}: x \geqslant 0\}$.

Construct an analytic branch $\varphi(z)$ of $\sqrt{1-z^{2}}$ on $\mathbb{C} \backslash\{x \in \mathbb{R}:-1 \leqslant x \leqslant 1\}$ with $\varphi(2 i)=\sqrt{5}$. [If you choose to use $\sigma_{1}$ and $\sigma_{2}$ in your construction, then you may assume without proof that they are analytic.]

Show that for $0<|z|<1$ we have $\varphi(1 / z)=-i \sigma_{1}\left(1-z^{2}\right) / z$. Hence find the first three terms of the Laurent series of $\varphi(1 / z)$ about 0 .

Set $f(z)=\varphi(z) /\left(1+z^{2}\right)$ for $|z|>1$ and $g(z)=f(1 / z) / z^{2}$ for $0<|z|<1$. Compute the residue of $g$ at 0 and use it to compute the integral

$\int_{|z|=2} f(z) d z$

Paper 2, Section II, B

For the function

$f(z)=\frac{1}{z(z-2)}$

find the Laurent expansions

(i) about $z=0$ in the annulus $0<|z|<2$,

(ii) about $z=0$ in the annulus $2<|z|<\infty$,

(iii) about $z=1$ in the annulus $0<|z-1|<1$.

What is the nature of the singularity of $f$, if any, at $z=0, z=\infty$ and $z=1$ ?

Using an integral of $f$, or otherwise, evaluate

$\int_{0}^{2 \pi} \frac{2-\cos \theta}{5-4 \cos \theta} d \theta$
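As a numerical cross-check on the contour-integral evaluation (the quadrature scheme below is an illustrative choice, not part of the expected answer), the trapezoidal rule converges extremely fast for smooth periodic integrands and confirms that the integral equals $\pi$:

```python
import math

def integrand(t):
    return (2 - math.cos(t)) / (5 - 4 * math.cos(t))

# For a smooth 2*pi-periodic integrand, the equally spaced trapezoidal
# (= rectangle) rule converges geometrically, so N = 4096 is ample.
N = 4096
approx = sum(integrand(2 * math.pi * k / N) for k in range(N)) * (2 * math.pi / N)
```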

Paper 1, Section II, D

Write down the electric potential due to a point charge $Q$ at the origin.

A dipole consists of a charge $Q$ at the origin, and a charge $-Q$ at position $-\mathbf{d}$. Show that, at large distances, the electric potential due to such a dipole is given by

$\Phi(\mathbf{x})=\frac{1}{4 \pi \epsilon_{0}} \frac{\mathbf{p} \cdot \mathbf{x}}{|\mathbf{x}|^{3}}$

where $\mathbf{p}=Q \mathbf{d}$ is the dipole moment. Hence show that the potential energy between two dipoles $\mathbf{p}_{1}$ and $\mathbf{p}_{2}$, with separation $\mathbf{r}$, where $|\mathbf{r}| \gg|\mathbf{d}|$, is

$U=\frac{1}{8 \pi \epsilon_{0}}\left(\frac{\mathbf{p}_{1} \cdot \mathbf{p}_{2}}{r^{3}}-\frac{3\left(\mathbf{p}_{1} \cdot \mathbf{r}\right)\left(\mathbf{p}_{2} \cdot \mathbf{r}\right)}{r^{5}}\right)$

Dipoles are arranged on an infinite chessboard so that they make an angle $\theta$ with the horizontal in an alternating pattern as shown in the figure. Compute the energy between a given dipole and its four nearest neighbours, and show that this is independent of $\theta$.

Paper 2, Section I, D

Two concentric spherical shells with radii $R$ and $2 R$ carry fixed, uniformly distributed charges $Q_{1}$ and $Q_{2}$ respectively. Find the electric field and electric potential at all points in space. Calculate the total energy of the electric field.

Paper 2, Section II, D

(a) A surface current $\mathbf{K}=K \mathbf{e}_{x}$, with $K$ a constant and $\mathbf{e}_{x}$ the unit vector in the $x$-direction, lies in the plane $z=0$. Use Ampère's law to determine the magnetic field above and below the plane. Confirm that the magnetic field is discontinuous across the surface, with the discontinuity given by

$\lim _{z \rightarrow 0^{+}} \mathbf{e}_{z} \times \mathbf{B}-\lim _{z \rightarrow 0^{-}} \mathbf{e}_{z} \times \mathbf{B}=\mu_{0} \mathbf{K}$

where $\mathbf{e}_{z}$ is the unit vector in the $z$-direction.

(b) A surface current $\mathbf{K}$ flows radially in the $z=0$ plane, resulting in a pile-up of charge $Q$ at the origin, with $d Q / d t=I$, where $I$ is a constant.

Write down the electric field $\mathbf{E}$ due to the charge at the origin, and hence the displacement current $\epsilon_{0} \partial \mathbf{E} / \partial t$.

Confirm that, away from the plane and for $\theta<\pi / 2$, the magnetic field due to the displacement current is given by

$\mathbf{B}(r, \theta)=\frac{\mu_{0} I}{4 \pi r} \tan \left(\frac{\theta}{2}\right) \mathbf{e}_{\phi}$

where $(r, \theta, \phi)$ are the usual spherical polar coordinates. [Hint: Use Stokes' theorem applied to a spherical cap that subtends an angle $\theta$.]

Paper 1, Section II, C

Steady two-dimensional potential flow of an incompressible fluid is confined to the wedge $0<\theta<\alpha$, where $(r, \theta)$ are polar coordinates centred on the vertex of the wedge and $0<\alpha<\pi$.

(a) Show that a velocity potential $\phi$ of the form

$\phi(r, \theta)=A r^{\gamma} \cos (\lambda \theta),$

where $A, \gamma$ and $\lambda$ are positive constants, satisfies the condition of incompressible flow, provided that $\gamma$ and $\lambda$ satisfy a certain relation to be determined.

Assuming that $u_{\theta}$, the $\theta$-component of velocity, does not change sign within the wedge, determine the values of $\gamma$ and $\lambda$ by using the boundary conditions.

(b) Calculate the shape of the streamlines of this flow, labelling them by the distance $r_{\min }$ of closest approach to the vertex. Sketch the streamlines.

(c) Show that the speed $|\mathbf{u}|$ and pressure $p$ are independent of $\theta$. Assuming that at some radius $r=r_{0}$ the speed and pressure are $u_{0}$ and $p_{0}$, respectively, find the pressure difference in the flow between the vertex of the wedge and $r_{0}$.

[Hint: In polar coordinates $(r, \theta)$,

$\nabla f=\left(\frac{\partial f}{\partial r}, \frac{1}{r} \frac{\partial f}{\partial \theta}\right) \quad \text { and } \quad \nabla \cdot \mathbf{F}=\frac{1}{r} \frac{\partial}{\partial r}\left(r F_{r}\right)+\frac{1}{r} \frac{\partial F_{\theta}}{\partial \theta}$

for a scalar $f$ and a vector $\mathbf{F}=\left(F_{r}, F_{\theta}\right)$.]

Paper 2, Section I, C

Incompressible fluid of constant viscosity $\mu$ is confined to the region $0<y<h$ between two parallel rigid plates. Consider two parallel viscous flows: flow A is driven by the motion of one plate in the $x$-direction with the other plate at rest; flow B is driven by a constant pressure gradient in the $x$-direction with both plates at rest. The velocity mid-way between the plates is the same for both flows.

The viscous friction in these flows is known to produce heat locally at a rate

$Q=\mu\left(\frac{\partial u}{\partial y}\right)^{2}$

per unit volume, where $u$ is the $x$-component of the velocity. Determine the ratio of the total rate of heat production in flow A to that in flow B.

Paper 2, Section II, C

A vertical cylindrical container of radius $R$ is partly filled with fluid of constant density to depth $h$. The free surface is perturbed so that the fluid occupies the region

$0<r<R, \quad-h<z<\zeta(r, \theta, t)$

where $(r, \theta, z)$ are cylindrical coordinates and $\zeta$ is the perturbed height of the free surface. For small perturbations, a linearised description of surface waves in the cylinder yields the following system of equations for $\zeta$ and the velocity potential $\phi$ :

$\begin{aligned} \nabla^{2} \phi &=0, \quad 0<r<R, \quad-h<z<0 \qquad(1) \\ \frac{\partial \phi}{\partial t}+g \zeta &=0 \quad \text { on } \quad z=0 \qquad(2) \\ \frac{\partial \zeta}{\partial t}-\frac{\partial \phi}{\partial z} &=0 \quad \text { on } \quad z=0 \qquad(3) \\ \frac{\partial \phi}{\partial z} &=0 \quad \text { on } \quad z=-h \qquad(4) \\ \frac{\partial \phi}{\partial r} &=0 \quad \text { on } \quad r=R \qquad(5) \end{aligned}$

(a) Describe briefly the physical meaning of each equation.

(b) Consider axisymmetric normal modes of the form

$\phi=\operatorname{Re}\left(\hat{\phi}(r, z) e^{-i \sigma t}\right), \quad \zeta=\operatorname{Re}\left(\hat{\zeta}(r) e^{-i \sigma t}\right)$

Show that the system of equations $(1)-(5)$ admits a solution for $\hat{\phi}$ of the form

$\hat{\phi}(r, z)=A J_{0}\left(k_{n} r\right) Z(z)$

where $A$ is an arbitrary amplitude, $J_{0}(x)$ satisfies the equation

$\frac{d^{2} J_{0}}{d x^{2}}+\frac{1}{x} \frac{d J_{0}}{d x}+J_{0}=0$

the wavenumber $k_{n}, n=1,2, \ldots$ is such that $x_{n}=k_{n} R$ is one of the zeros of the function $d J_{0} / d x$, and the function $Z(z)$ should be determined explicitly.

(c) Show that the frequency $\sigma_{n}$ of the $n$-th mode is given by

$\sigma_{n}^{2}=\frac{g}{h} \Psi\left(k_{n} h\right)$

where the function $\Psi(x)$ is to be determined.

[Hint: In cylindrical coordinates $(r, \theta, z)$,

$\nabla^{2}=\frac{1}{r} \frac{\partial}{\partial r}\left(r \frac{\partial}{\partial r}\right)+\frac{1}{r^{2}} \frac{\partial^{2}}{\partial \theta^{2}}+\frac{\partial^{2}}{\partial z^{2}} .$]

Paper 1, Section I, E

Define the Gauss map of a smooth embedded surface. Consider the surface of revolution $S$ with points

$\left(\begin{array}{c} (2+\cos v) \cos u \\ (2+\cos v) \sin u \\ \sin v \end{array}\right) \in \mathbb{R}^{3}$

for $u, v \in[0,2 \pi]$. Let $f$ be the Gauss map of $S$. Describe $f$ on the $\{y=0\}$ cross-section of $S$, and use this to write down an explicit formula for $f$.

Let $U$ be the upper hemisphere of the 2-sphere $S^{2}$, and $K$ the Gauss curvature of $S$. Calculate $\int_{f^{-1}(U)} K d A$.

Paper 1, Section II, E

Let $\mathcal{C}$ be the curve in the $(x, z)$-plane defined by the equation

$\left(x^{2}-1\right)^{2}+\left(z^{2}-1\right)^{2}=5 .$

Sketch $\mathcal{C}$, taking care with inflection points.

Let $S$ be the surface of revolution in $\mathbb{R}^{3}$ given by spinning $\mathcal{C}$ about the $z$-axis. Write down an equation defining $S$. Stating any result you use, show that $S$ is a smooth embedded surface.

Let $r$ be the radial coordinate on the $(x, y)$-plane. Show that the Gauss curvature of $S$ vanishes when $r=1$. Are these the only points at which the Gauss curvature of $S$ vanishes? Briefly justify your answer.

Paper 2, Section II, F

Let $H=\{z=x+i y \in \mathbb{C}: y>0\}$ be the hyperbolic half-plane with the metric $g_{H}=\left(d x^{2}+d y^{2}\right) / y^{2}$. Define the length of a continuously differentiable curve in $H$ with respect to $g_{H}$.

What are the hyperbolic lines in $H$ ? Show that for any two distinct points $z, w$ in $H$, the infimum $\rho(z, w)$ of the lengths (with respect to $g_{H}$ ) of curves from $z$ to $w$ is attained by the segment $[z, w]$ of the hyperbolic line with an appropriate parameterisation.

The 'hyperbolic Pythagoras theorem' asserts that if a hyperbolic triangle $A B C$ has angle $\pi / 2$ at $C$ then

$\cosh c=\cosh a \cosh b,$

where $a, b, c$ are the lengths of the sides $B C, A C, A B$, respectively.

Let $l$ and $m$ be two hyperbolic lines in $H$ such that

$\inf \{\rho(z, w): z \in l, w \in m\}=d>0$

Prove that the distance $d$ is attained by the points of intersection with a hyperbolic line $h$ that meets each of $l, m$ orthogonally. Give an example of two hyperbolic lines $l$ and $m$ such that the infimum of $\rho(z, w)$ is not attained by any $z \in l, w \in m$.

[You may assume that every Möbius transformation that maps $H$ onto itself is an isometry of $g_{H}$.]

Paper 1, Section II, G

State the structure theorem for a finitely generated module $M$ over a Euclidean domain $R$ in terms of invariant factors.

Let $V$ be a finite-dimensional vector space over a field $F$ and let $\alpha: V \rightarrow V$ be a linear map. Let $V_{\alpha}$ denote the $F[X]$-module $V$ with $X$ acting as $\alpha$. Apply the structure theorem to $V_{\alpha}$ to show the existence of a basis of $V$ with respect to which $\alpha$ has the rational canonical form. Prove that the minimal polynomial and the characteristic polynomial of $\alpha$ can be expressed in terms of the invariant factors. [Hint: For the characteristic polynomial apply suitable row operations.] Deduce the Cayley-Hamilton theorem for $\alpha$.

Now assume that $\alpha$ has matrix $\left(a_{i j}\right)$ with respect to the basis $v_{1}, \ldots, v_{n}$ of $V$. Let $M$ be the free $F[X]$-module of rank $n$ with free basis $m_{1}, \ldots, m_{n}$ and let $\theta: M \rightarrow V_{\alpha}$ be the unique homomorphism with $\theta\left(m_{i}\right)=v_{i}$ for $1 \leqslant i \leqslant n$. Using the fact, which you need not prove, that ker $\theta$ is generated by the elements $X m_{i}-\sum_{j=1}^{n} a_{j i} m_{j}, 1 \leqslant i \leqslant n$, find the invariant factors of $V_{\alpha}$ in the case that $V=\mathbb{R}^{3}$ and $\alpha$ is represented by the real matrix

$\left(\begin{array}{ccc} 0 & 1 & 0 \\ -4 & 4 & 0 \\ -2 & 1 & 2 \end{array}\right)$

with respect to the standard basis.

Paper 2, Section I, G

Assume a group $G$ acts transitively on a set $\Omega$ and that the size of $\Omega$ is a prime number. Let $H$ be a normal subgroup of $G$ that acts non-trivially on $\Omega$.

Show that any two $H$-orbits of $\Omega$ have the same size. Deduce that the action of $H$ on $\Omega$ is transitive.

Let $\alpha \in \Omega$ and let $G_{\alpha}$ denote the stabiliser of $\alpha$ in $G$. Show that if $H \cap G_{\alpha}$ is trivial, then there is a bijection $\theta: H \rightarrow \Omega$ under which the action of $G_{\alpha}$ on $H$ by conjugation corresponds to the action of $G_{\alpha}$ on $\Omega$.

Paper 2, Section II, G

State Gauss' lemma. State and prove Eisenstein's criterion.

Define the notion of an algebraic integer. Show that if $\alpha$ is an algebraic integer, then $\{f \in \mathbb{Z}[X]: f(\alpha)=0\}$ is a principal ideal generated by a monic, irreducible polynomial.

Let $f=X^{4}+2 X^{3}-3 X^{2}-4 X-11$. Show that $\mathbb{Q}[X] /(f)$ is a field. Show that $\mathbb{Z}[X] /(f)$ is an integral domain, but not a field. Justify your answers.

Paper 1, Section I, F

Define what it means for two $n \times n$ matrices $A$ and $B$ to be similar. Define the Jordan normal form of a matrix.

Determine whether the matrices

$A=\left(\begin{array}{ccc} 4 & 6 & -15 \\ 1 & 3 & -5 \\ 1 & 2 & -4 \end{array}\right), \quad B=\left(\begin{array}{ccc} 1 & -3 & 3 \\ -2 & -6 & 13 \\ -1 & -4 & 8 \end{array}\right)$

are similar, carefully stating any theorem you use.
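One way to sanity-check such a question numerically: the two matrices here share trace, determinant and sum of principal $2 \times 2$ minors, hence the characteristic polynomial $(x-1)^{3}$, so similarity hinges on the Jordan structure, which is detected by the rank of $M-I$. A sketch in exact rational arithmetic (illustrative, not a substitute for the theorem the question asks you to cite):

```python
from fractions import Fraction

def rank(m):
    """Rank by exact Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in m]
    rows, cols, r = len(m), len(m[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, rows):
            f = m[i][c] / m[r][c]
            for k in range(c, cols):
                m[i][k] -= f * m[r][k]
        r += 1
    return r

def minus_identity(m):
    return [[m[i][j] - (1 if i == j else 0) for j in range(len(m))]
            for i in range(len(m))]

A = [[4, 6, -15], [1, 3, -5], [1, 2, -4]]
B = [[1, -3, 3], [-2, -6, 13], [-1, -4, 8]]

# Both have characteristic polynomial (x-1)^3; the nullity of M - I
# counts the Jordan blocks for the eigenvalue 1.
rA = rank(minus_identity(A))
rB = rank(minus_identity(B))
```

Different ranks mean different numbers of Jordan blocks, which settles the comparison.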

Paper 1, Section II, F

Let $\mathcal{M}_{n}$ denote the vector space of $n \times n$ matrices over a field $\mathbb{F}=\mathbb{R}$ or $\mathbb{C}$. What is the rank $r(A)$ of a matrix $A \in \mathcal{M}_{n}$?

Show, stating accurately any preliminary results that you require, that $r(A)=n$ if and only if $A$ is non-singular, i.e. $\operatorname{det} A \neq 0$.

Does $\mathcal{M}_{n}$ have a basis consisting of non-singular matrices? Justify your answer.

Suppose that an $n \times n$ matrix $A$ is non-singular and every entry of $A$ is either 0 or 1. Let $c_{n}$ be the largest possible number of 1's in such an $A$. Show that $c_{n} \leqslant n^{2}-n+1$. Is this bound attained? Justify your answer.

[Standard properties of the adjugate matrix can be assumed, if accurately stated.]

Paper 2, Section II, F

Let $V$ be a finite-dimensional vector space over a field. Show that an endomorphism $\alpha$ of $V$ is idempotent, i.e. $\alpha^{2}=\alpha$, if and only if $\alpha$ is a projection onto its image.

Determine whether the following statements are true or false, giving a proof or counterexample as appropriate:

(i) If $\alpha^{3}=\alpha^{2}$, then $\alpha$ is idempotent.

(ii) The condition $\alpha(1-\alpha)^{2}=0$ is equivalent to $\alpha$ being idempotent.

(iii) If $\alpha$ and $\beta$ are idempotent and such that $\alpha+\beta$ is also idempotent, then $\alpha \beta=0$.

(iv) If $\alpha$ and $\beta$ are idempotent and $\alpha \beta=0$, then $\alpha+\beta$ is also idempotent.

Paper 1, Section II, H

Let $\left(X_{n}\right)_{n \geqslant 0}$ be a Markov chain with transition matrix $P$. What is a stopping time of $\left(X_{n}\right)_{n \geqslant 0}$? What is the strong Markov property?

A porter is trying to apprehend a student who is walking along a long narrow path at night. Being unaware of the porter, the student's location $Y_{n}$ at time $n \geqslant 0$ evolves as a simple symmetric random walk on $\mathbb{Z}$. The porter's initial location $Z_{0}$ is $2 m$ units to the right of the student, so $Z_{0}-Y_{0}=2 m$ where $m \geqslant 1$. The future locations $Z_{n+1}$ of the porter evolve as follows: The porter moves to the left (so $Z_{n+1}=Z_{n}-1$ ) with probability $q \in\left(\frac{1}{2}, 1\right)$, and to the right with probability $1-q$ whenever $Z_{n}-Y_{n}>2$. When $Z_{n}-Y_{n}=2$, the porter's probability of moving left changes to $r \in(0,1)$, and the probability of moving right is $1-r$.

(a) By setting up an appropriate Markov chain, show that for $m \geqslant 2$, the expected time for the porter to be a distance $2(m-1)$ away from the student is $2 /(2 q-1)$.

(b) Show that the expected time for the porter to catch the student, i.e. for their locations to coincide, is

$\frac{2}{r}+\left(m+\frac{1}{r}-2\right) \frac{2}{2 q-1} .$

[You may use without proof the fact that the time for the porter to catch the student is finite with probability 1 for any $m \geqslant 1$.]
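The expected catch time can be sanity-checked by direct simulation of the difference chain $D_{n}=Z_{n}-Y_{n}$; the parameter values below are illustrative choices, and the closed form being tested is the one displayed above:

```python
import random

def catch_time(m, q, r, rng):
    """Simulate D_n = Z_n - Y_n until the porter catches the student (D = 0)."""
    d = 2 * m
    steps = 0
    while d > 0:
        student = rng.choice((-1, 1))        # simple symmetric random walk
        p_left = r if d == 2 else q          # porter's rule changes at D = 2
        porter = -1 if rng.random() < p_left else 1
        d += porter - student                # D changes by -2, 0 or +2
        steps += 1
    return steps

rng = random.Random(0)
m, q, r = 2, 0.8, 0.5                        # illustrative parameter values
trials = 20000
mean_time = sum(catch_time(m, q, r, rng) for _ in range(trials)) / trials
expected = 2 / r + (m + 1 / r - 2) * 2 / (2 * q - 1)
```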

Paper 2, Section I, H

Let $\left(X_{n}\right)_{n \geqslant 0}$ be a Markov chain with state space $\{1,2\}$ and transition matrix

$P=\left(\begin{array}{cc} 1-\alpha & \alpha \\ \beta & 1-\beta \end{array}\right)$

where $\alpha, \beta \in(0,1]$. Compute $\mathbb{P}\left(X_{n}=1 \mid X_{0}=1\right)$. Find the value of $\mathbb{P}\left(X_{n}=1 \mid X_{0}=2\right)$.
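The requested probabilities have a standard closed form coming from the spectral decomposition of $P$ (eigenvalues $1$ and $1-\alpha-\beta$). A quick numerical check against matrix powers, with the closed forms stated here as the quantities under test rather than as given facts:

```python
def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(P, n):
    R = [[1.0, 0.0], [0.0, 1.0]]             # 2x2 identity
    for _ in range(n):
        R = mat_mul(R, P)
    return R

alpha, beta = 0.3, 0.5                        # illustrative values in (0, 1]
P = [[1 - alpha, alpha], [beta, 1 - beta]]
n = 7
Pn = mat_pow(P, n)

# Candidate closed forms from the spectral decomposition:
s = alpha + beta
from_state_1 = beta / s + (alpha / s) * (1 - s) ** n   # P(X_n = 1 | X_0 = 1)
from_state_2 = beta / s - (beta / s) * (1 - s) ** n    # P(X_n = 1 | X_0 = 2)
```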

Paper 1, Section II, B

Consider the equation

$\nabla^{2} \phi=\delta(x) g(y) \qquad(*)$

on the two-dimensional strip $-\infty<x<\infty, 0 \leqslant y \leqslant a$, where $\delta(x)$ is the delta function and $g(y)$ is a smooth function satisfying $g(0)=g(a)=0$. The function $\phi(x, y)$ satisfies the boundary conditions $\phi(x, 0)=\phi(x, a)=0$ and $\lim _{x \rightarrow \pm \infty} \phi(x, y)=0$. By using solutions of Laplace's equation for $x<0$ and $x>0$, matched suitably at $x=0$, find the solution of $(*)$ in terms of Fourier coefficients of $g(y)$.

Find the solution of $(*)$ in the limiting case $g(y)=\delta(y-c)$, where $0<c<a$, and hence determine the Green's function $\phi(x, y)$ in the strip, satisfying

$\nabla^{2} \phi=\delta(x-b) \delta(y-c)$

and the same boundary conditions as before.

Paper 2, Section I, B

Find the Fourier transform of the function

$f(x)= \begin{cases}A, & |x| \leqslant 1 \\ 0, & |x|>1\end{cases}$

Determine the convolution of the function $f(x)$ with itself.

State the convolution theorem for Fourier transforms. Using it, or otherwise, determine the Fourier transform of the function

$g(x)= \begin{cases}B(2-|x|), & |x| \leqslant 2 \\ 0, & |x|>2\end{cases}$
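With the convention $\hat{f}(k)=\int_{-\infty}^{\infty} f(x) e^{-i k x} d x$ (conventions differ, so this is an assumption), $\hat{f}(k)=2 A \sin k / k$, and since $g$ is $B$ times the self-convolution of $f$ taken with $A=1$, the convolution theorem predicts $\hat{g}(k)=B(2 \sin k / k)^{2}$. An illustrative numerical check:

```python
import math

B = 1.5                                       # illustrative value

def g(x):
    return B * (2 - abs(x)) if abs(x) <= 2 else 0.0

def g_hat(k, N=20000):
    """Numerical transform of g; g is even, so only the cosine part survives."""
    h = 2.0 / N
    inner = sum(g(i * h) * math.cos(k * i * h) for i in range(1, N))
    total = 0.5 * g(0.0) + inner + 0.5 * g(2.0) * math.cos(2.0 * k)
    return 2.0 * h * total                    # trapezoid on [0,2], doubled

def g_hat_closed(k):
    # Conjectured closed form from the convolution theorem (to be verified).
    return B * (2.0 * math.sin(k) / k) ** 2
```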

Paper 2, Section II, A

(i) The solution to the equation

$\frac{d}{d x}\left(x \frac{d F}{d x}\right)+\alpha^{2} x F=0$

that is regular at the origin is $F(x)=C J_{0}(\alpha x)$, where $\alpha$ is a real, positive parameter, $J_{0}$ is a Bessel function, and $C$ is an arbitrary constant. The Bessel function has infinitely many zeros: $J_{0}\left(\gamma_{k}\right)=0$ with $\gamma_{k}>0$, for $k=1,2, \ldots$. Show that

$\int_{0}^{1} J_{0}(\alpha x) J_{0}(\beta x) x d x=\frac{\beta J_{0}(\alpha) J_{0}^{\prime}(\beta)-\alpha J_{0}(\beta) J_{0}^{\prime}(\alpha)}{\alpha^{2}-\beta^{2}}, \quad \alpha \neq \beta$

(where $\alpha$ and $\beta$ are real and positive) and deduce that

$\int_{0}^{1} J_{0}\left(\gamma_{k} x\right) J_{0}\left(\gamma_{\ell} x\right) x d x=0, \quad k \neq \ell ; \quad \int_{0}^{1}\left(J_{0}\left(\gamma_{k} x\right)\right)^{2} x d x=\frac{1}{2}\left(J_{0}^{\prime}\left(\gamma_{k}\right)\right)^{2}$

[Hint: For the second identity, consider $\alpha=\gamma_{k}$ and $\beta=\gamma_{k}+\epsilon$ with $\epsilon$ small.]
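Both orthogonality identities can be checked numerically from the power series of $J_{0}$; the zeros $\gamma_{1}, \gamma_{2}$ below are standard tabulated values, taken as inputs rather than computed:

```python
import math

def j0(x, terms=40):
    """J0 by its power series: sum_m (-1)^m (x/2)^(2m) / (m!)^2."""
    s, term = 0.0, 1.0
    for m in range(terms):
        s += term
        term *= -(x / 2.0) ** 2 / ((m + 1) ** 2)
    return s

def dj0(x, terms=40):
    """J0'(x) by termwise differentiation of the series."""
    return sum((-1) ** m * m * (x / 2.0) ** (2 * m - 1) / math.factorial(m) ** 2
               for m in range(1, terms))

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    return h / 3.0 * sum((1 if i in (0, n) else 4 if i % 2 else 2) * f(a + i * h)
                         for i in range(n + 1))

# First two zeros of J0 -- standard tabulated values, assumed here.
g1, g2 = 2.404825557695773, 5.520078110286311

orth = simpson(lambda x: j0(g1 * x) * j0(g2 * x) * x, 0.0, 1.0)
norm = simpson(lambda x: j0(g1 * x) ** 2 * x, 0.0, 1.0)
```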

(ii) The displacement $z(r, t)$ of the membrane of a circular drum of unit radius obeys

$\frac{1}{r} \frac{\partial}{\partial r}\left(r \frac{\partial z}{\partial r}\right)=\frac{\partial^{2} z}{\partial t^{2}}, \quad z(1, t)=0$

where $r$ is the radial coordinate on the membrane surface, $t$ is time (in certain units), and the displacement is assumed to have no angular dependence. At $t=0$ the drum is struck, so that

$z(r, 0)=0, \quad \frac{\partial z}{\partial t}(r, 0)=\left\{\begin{array}{cc} U, & r<b \\ 0, & r>b \end{array}\right.$

where $U$ and $b<1$ are constants. Show that the subsequent motion is given by

$z(r, t)=\sum_{k=1}^{\infty} C_{k} J_{0}\left(\gamma_{k} r\right) \sin \left(\gamma_{k} t\right) \quad \text { where } \quad C_{k}=-2 b U \frac{J_{0}^{\prime}\left(\gamma_{k} b\right)}{\gamma_{k}^{2}\left(J_{0}^{\prime}\left(\gamma_{k}\right)\right)^{2}}$

Paper 1, Section I, C

(a) Find an $L U$ factorisation of the matrix

$A=\left[\begin{array}{cccc} 1 & 1 & 0 & 3 \\ 0 & 2 & 2 & 12 \\ 0 & 5 & 7 & 32 \\ 3 & -1 & -1 & -10 \end{array}\right]$

where the diagonal elements of $L$ are $L_{11}=L_{44}=1, L_{22}=L_{33}=2$.

(b) Use this factorisation to solve the linear system $A \mathbf{x}=\mathbf{b}$, where

$\mathbf{b}=\left[\begin{array}{c} -3 \\ -12 \\ -30 \\ 13 \end{array}\right]$
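A sketch of the computation in exact rational arithmetic (Doolittle-style elimination with the prescribed diagonal of $L$; no pivoting, so it assumes the leading minors are nonsingular, which holds here):

```python
from fractions import Fraction

def lu_with_diagonal(A, diag):
    """LU factorisation where the diagonal entries of L are prescribed."""
    n = len(A)
    L = [[Fraction(0)] * n for _ in range(n)]
    U = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = Fraction(diag[i])
        for j in range(i, n):            # row i of U
            U[i][j] = (Fraction(A[i][j])
                       - sum(L[i][k] * U[k][j] for k in range(i))) / L[i][i]
        for r in range(i + 1, n):        # column i of L
            L[r][i] = (Fraction(A[r][i])
                       - sum(L[r][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def solve(L, U, b):
    n = len(b)
    y = [Fraction(0)] * n
    for i in range(n):                   # forward substitution: L y = b
        y[i] = (Fraction(b[i]) - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):         # back substitution: U x = y
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[1, 1, 0, 3], [0, 2, 2, 12], [0, 5, 7, 32], [3, -1, -1, -10]]
b = [-3, -12, -30, 13]
L, U = lu_with_diagonal(A, [1, 2, 2, 1])   # diagonal from the question
x = solve(L, U, b)
```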

Paper 1, Section II, C

(a) Given a set of $n+1$ distinct real points $x_{0}, x_{1}, \ldots, x_{n}$ and real numbers $f_{0}, f_{1}, \ldots, f_{n}$, show that the interpolating polynomial $p_{n} \in \mathbb{P}_{n}[x], p_{n}\left(x_{i}\right)=f_{i}$, can be written in the form

$p_{n}(x)=\sum_{k=0}^{n} a_{k} \prod_{j=0, j \neq k}^{n} \frac{x-x_{j}}{x_{k}-x_{j}}, \quad x \in \mathbb{R}$

where the coefficients $a_{k}$ are to be determined.

(b) Consider the approximation of the integral of a function $f \in C[a, b]$ by a finite sum,

$\int_{a}^{b} f(x) d x \approx \sum_{k=0}^{s-1} w_{k} f\left(c_{k}\right) \qquad(1)$

where the weights $w_{0}, \ldots, w_{s-1}$ and nodes $c_{0}, \ldots, c_{s-1} \in[a, b]$ are independent of $f$. Derive the expressions for the weights $w_{k}$ that make the approximation (1) exact for $f$ being any polynomial of degree $s-1$, i.e. $f \in \mathbb{P}_{s-1}[x]$.

Show that by choosing $c_{0}, \ldots, c_{s-1}$ to be zeros of the polynomial $q_{s}(x)$ of degree $s$, one of a sequence of orthogonal polynomials defined with respect to the scalar product

$\langle u, v\rangle=\int_{a}^{b} u(x) v(x) d x \qquad(2)$

the approximation (1) becomes exact for $f \in \mathbb{P}_{2 s-1}[x]$ (i.e. for all polynomials of degree $2 s-1$).

(c) On the interval $[a, b]=[-1,1]$ the scalar product (2) generates orthogonal polynomials given by

$q_{n}(x)=\frac{1}{2^{n} n !} \frac{d^{n}}{d x^{n}}\left(x^{2}-1\right)^{n}, \quad n=0,1,2, \ldots$

Find the values of the nodes $c_{k}$ for which the approximation (1) is exact for all polynomials of degree 7 (i.e. $f \in \mathbb{P}_{7}[x]$ ) but no higher.
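Exactness up to degree 7 requires $s=4$ Gauss points. As an illustrative check (the nodes and weights below are the standard tabulated 4-point Gauss-Legendre values, taken as given rather than derived):

```python
# 4-point Gauss-Legendre rule on [-1, 1]: nodes are the zeros of the
# degree-4 Legendre polynomial q_4, weights are the classical values.
nodes = [-0.8611363115940526, -0.3399810435848563,
          0.3399810435848563,  0.8611363115940526]
weights = [0.3478548451374538, 0.6521451548625461,
           0.6521451548625461, 0.3478548451374538]

def gauss4(f):
    return sum(w * f(x) for w, x in zip(weights, nodes))

def q4(x):
    # Degree-4 case of the Rodrigues formula above: (35 x^4 - 30 x^2 + 3) / 8
    return (35.0 * x ** 4 - 30.0 * x ** 2 + 3.0) / 8.0

int_x6 = gauss4(lambda x: x ** 6)    # true value 2/7: rule exact at degree 7
int_x8 = gauss4(lambda x: x ** 8)    # true value 2/9: rule no longer exact
```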

Paper 2, Section II, C

Consider a multistep method for numerical solution of the differential equation $\mathbf{y}^{\prime}=\mathbf{f}(t, \mathbf{y})$:

$\mathbf{y}_{n+2}-\mathbf{y}_{n+1}=h\left[(1+\alpha) \mathbf{f}\left(t_{n+2}, \mathbf{y}_{n+2}\right)+\beta \mathbf{f}\left(t_{n+1}, \mathbf{y}_{n+1}\right)-(\alpha+\beta) \mathbf{f}\left(t_{n}, \mathbf{y}_{n}\right)\right], \qquad(*)$

where $n=0,1, \ldots$, and $\alpha$ and $\beta$ are constants.

(a) Define the order of a method for numerically solving an ODE.

(b) Show that in general an explicit method of the form $(*)$ has order 1. Determine the values of $\alpha$ and $\beta$ for which this multistep method is of order 3.

(c) Show that the multistep method (*) is convergent.

Paper 1, Section I, H

Solve the following optimization problem using the simplex algorithm:

$\begin{array}{rr} \operatorname{maximise} & x_{1}+x_{2} \\ \text { subject to } & \left|x_{1}-2 x_{2}\right| \leqslant 2 \\ & 4 x_{1}+x_{2} \leqslant 4, \quad x_{1}, x_{2} \geqslant 0 \end{array}$

Suppose the constraints above are now replaced by $\left|x_{1}-2 x_{2}\right| \leqslant 2+\epsilon_{1}$ and $4 x_{1}+x_{2} \leqslant 4+\epsilon_{2}$. Give an expression for the maximum objective value that is valid for all sufficiently small non-zero $\epsilon_{1}$ and $\epsilon_{2}$.
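The optimal value can be cross-checked by brute-force vertex enumeration, which is not the simplex algorithm the question asks for, but confirms the answer for this small feasible region:

```python
from fractions import Fraction
from itertools import combinations

# Constraints a.x <= c, with |x1 - 2 x2| <= 2 split into two inequalities
# and the sign constraints x1, x2 >= 0 included.
cons = [((1, -2), 2), ((-1, 2), 2), ((4, 1), 4), ((-1, 0), 0), ((0, -1), 0)]

def feasible_vertices():
    """Intersect constraint pairs (Cramer's rule) and keep feasible points."""
    for (a, c), (b, d) in combinations(cons, 2):
        det = a[0] * b[1] - a[1] * b[0]
        if det == 0:
            continue
        x1 = Fraction(c * b[1] - d * a[1], det)
        x2 = Fraction(a[0] * d - b[0] * c, det)
        if all(e[0] * x1 + e[1] * x2 <= f for (e, f) in cons):
            yield (x1, x2)

verts = list(feasible_vertices())
best = max(x1 + x2 for x1, x2 in verts)   # optimal objective value
```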

Paper 2, Section II, H

State and prove the Lagrangian sufficiency theorem.

Solve, using the Lagrangian method, the optimization problem

$\begin{array}{ll} \operatorname{maximise} & x+y+2 a \sqrt{1+z} \\ \text { subject to } & x+\frac{1}{2} y^{2}+z=b \\ & x, z \geqslant 0 \end{array}$

where the constants $a$ and $b$ satisfy $a \geqslant 1$ and $b \geqslant 1 / 2$.

[You need not prove that your solution is unique.]

Paper 1, Section I,

Define what it means for an operator $Q$ to be hermitian and briefly explain the significance of this definition in quantum mechanics.

Define the uncertainty $(\Delta Q)_{\psi}$ of $Q$ in a state $\psi$. If $P$ is also a hermitian operator, show by considering the state $(Q+i \lambda P) \psi$, where $\lambda$ is a real number, that

$\left\langle Q^{2}\right\rangle_{\psi}\left\langle P^{2}\right\rangle_{\psi} \geqslant \frac{1}{4}\left|\langle i[Q, P]\rangle_{\psi}\right|^{2}$

Hence deduce that

$(\Delta Q)_{\psi}(\Delta P)_{\psi} \geqslant \frac{1}{2}\left|\langle i[Q, P]\rangle_{\psi}\right|$

Give a physical interpretation of this result.

Paper 1, Section II, A

Consider a quantum system with Hamiltonian $H$ and wavefunction $\Psi$ obeying the time-dependent Schrödinger equation. Show that if $\Psi$ is a stationary state and the observable $Q$ is independent of time, then $\langle Q\rangle_{\Psi}$ is independent of time.

A particle of mass $m$ is confined to the interval $0 \leqslant x \leqslant a$ by infinite potential barriers, but moves freely otherwise. Let $\Psi(x, t)$ be the normalised wavefunction for the particle at time $t$, with

$\Psi(x, 0)=c_{1} \psi_{1}(x)+c_{2} \psi_{2}(x)$

where

$\psi_{1}(x)=\left(\frac{2}{a}\right)^{1 / 2} \sin \frac{\pi x}{a}, \quad \psi_{2}(x)=\left(\frac{2}{a}\right)^{1 / 2} \sin \frac{2 \pi x}{a}$

and $c_{1}, c_{2}$ are complex constants. If the energy of the particle is measured at time $t$, what are the possible results, and what is the probability for each result to be obtained? Give brief justifications of your answers.

Calculate $\langle\hat{x}\rangle_{\Psi}$ at time $t$ and show that the result oscillates with a frequency $\omega$, to be determined. Show in addition that

$\left|\langle\hat{x}\rangle_{\Psi}-\frac{a}{2}\right| \leqslant \frac{16 a}{9 \pi^{2}} .$

Paper 2, Section II, A

(a) The potential $V(x)$ for a particle of mass $m$ in one dimension is such that $V \rightarrow 0$ rapidly as $x \rightarrow \pm \infty$. Let $\psi(x)$ be a wavefunction for the particle satisfying the time-independent Schrödinger equation with energy $E$.

Suppose $\psi$ has the asymptotic behaviour

$\psi(x) \sim A e^{i k x}+B e^{-i k x} \quad(x \rightarrow-\infty), \quad \psi(x) \sim C e^{i k x} \quad(x \rightarrow+\infty)$

where $A, B, C$ are complex coefficients. Explain, in outline, how the probability current $j(x)$ is used in the interpretation of such a solution as a scattering process and how the transmission and reflection probabilities $P_{\mathrm{tr}}$ and $P_{\text {ref }}$ are found.

Now suppose instead that $\psi(x)$ is a bound state solution. Write down the asymptotic behaviour in this case, relating an appropriate parameter to the energy $E$.

(b) Consider the potential

$V(x)=-\frac{\hbar^{2}}{m} \frac{a^{2}}{\cosh ^{2} a x}$

where $a$ is a real, positive constant. Show that

$\psi(x)=N e^{i k x}(a \tanh a x-i k)$

where $N$ is a complex coefficient, is a solution of the time-independent Schrödinger equation for any real $k$ and find the energy $E$. Show that $\psi$ represents a scattering process for which $P_{\text {ref }}=0$, and find $P_{\mathrm{tr}}$ explicitly.

Now let $k=i \lambda$ in the formula for $\psi$ above. Show that this defines a bound state if a certain real positive value of $\lambda$ is chosen and find the energy of this solution.

Paper 1, Section I, 6H

Suppose $X_{1}, \ldots, X_{n}$ are independent with distribution $N(\mu, 1)$. Suppose a prior $\mu \sim N\left(\theta, \tau^{-2}\right)$ is placed on the unknown parameter $\mu$ for some given deterministic $\theta \in \mathbb{R}$ and $\tau>0$. Derive the posterior mean.

Find an expression for the mean squared error of this posterior mean when $\theta=0$.
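As a sanity check on the algebra: the standard conjugate-normal computation (note the prior variance is $\tau^{-2}$, i.e. prior precision $\tau^{2}$) gives posterior mean $(\tau^{2}\theta + n\bar{X})/(\tau^{2}+n)$, and with $\theta=0$ its mean squared error is $(n+\tau^{4}\mu^{2})/(n+\tau^{2})^{2}$. A Monte Carlo sketch; the particular values of $\mu$, $n$, $\tau$ below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, n, tau, trials = 1.5, 5, 2.0, 400_000

# n i.i.d. N(mu, 1) observations per trial; prior mean theta = 0
X = rng.normal(mu, 1.0, size=(trials, n))

# Posterior mean (tau^2 * 0 + n * Xbar) / (tau^2 + n)
post_mean = n * X.mean(axis=1) / (n + tau**2)

mse = np.mean((post_mean - mu)**2)
theory = (n + tau**4 * mu**2) / (n + tau**2)**2
print(mse, theory)  # the two agree to Monte Carlo accuracy
```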

Paper 1, Section II, H

Let $X_{1}, \ldots, X_{n}$ be i.i.d. $U[0,2 \theta]$ random variables, where $\theta>0$ is unknown.

(a) Derive the maximum likelihood estimator $\hat{\theta}$ of $\theta$.

(b) What is a sufficient statistic? What is a minimal sufficient statistic? Is $\hat{\theta}$ sufficient for $\theta$ ? Is it minimal sufficient? Answer the same questions for the sample mean $\tilde{\theta}:=\sum_{i=1}^{n} X_{i} / n$. Briefly justify your answers.

[You may use any result from the course provided it is stated clearly.]

(c) Show that the mean squared errors of $\hat{\theta}$ and $\tilde{\theta}$ are respectively

$\frac{2 \theta^{2}}{(n+1)(n+2)} \quad \text { and } \quad \frac{\theta^{2}}{3 n} \text {. }$

(d) Show that for each $t \in \mathbb{R}, \lim _{n \rightarrow \infty} \mathbb{P}(n(1-\hat{\theta} / \theta) \geqslant t)=h(t)$ for a function $h$ you should specify. Give, with justification, an approximate $1-\alpha$ confidence interval for $\theta$ whose expected length is

$\left(\frac{n \theta}{n+1}\right)\left(\frac{\log (1 / \alpha)}{n-\log (1 / \alpha)}\right)$

[Hint: $\lim _{n \rightarrow \infty}\left(1-\frac{t}{n}\right)^{n}=e^{-t}$ for all $t \in \mathbb{R}$.]
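The closed forms in part (c) can be checked by simulation. The maximum likelihood estimator from part (a) is $\hat{\theta}=\max _{i} X_{i} / 2$; the values of $\theta$ and $n$ below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, trials = 2.0, 10, 200_000

X = rng.uniform(0, 2 * theta, size=(trials, n))

mle   = X.max(axis=1) / 2   # hat-theta = max_i X_i / 2
smean = X.mean(axis=1)      # tilde-theta, the sample mean

mse_mle   = np.mean((mle - theta)**2)
mse_smean = np.mean((smean - theta)**2)

print(mse_mle,   2 * theta**2 / ((n + 1) * (n + 2)))  # ~ 0.0606 each
print(mse_smean, theta**2 / (3 * n))                  # ~ 0.1333 each
```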

Paper 2, Section II, H

Consider the general linear model $Y=X \beta^{0}+\varepsilon$ where $X$ is a known $n \times p$ design matrix with $p \geqslant 2, \beta^{0} \in \mathbb{R}^{p}$ is an unknown vector of parameters, and $\varepsilon \in \mathbb{R}^{n}$ is a vector of stochastic errors with $\mathbb{E}\left(\varepsilon_{i}\right)=0, \operatorname{var}\left(\varepsilon_{i}\right)=\sigma^{2}>0$ and $\operatorname{cov}\left(\varepsilon_{i}, \varepsilon_{j}\right)=0$ for all $i, j=1, \ldots, n$ with $i \neq j$. Suppose $X$ has full column rank.

(a) Write down the least squares estimate $\hat{\beta}$ of $\beta^{0}$ and show that it minimises the least squares objective $S(\beta)=\|Y-X \beta\|^{2}$ over $\beta \in \mathbb{R}^{p}$.

(b) Write down the variance-covariance matrix $\operatorname{cov}(\hat{\beta})$.

(c) Let $\tilde{\beta} \in \mathbb{R}^{p}$ minimise $S(\beta)$ over $\beta \in \mathbb{R}^{p}$ subject to $\beta_{p}=0$. Let $Z$ be the $n \times(p-1)$ submatrix of $X$ that excludes the final column. Write down $\operatorname{cov}(\tilde{\beta})$.

(d) Let $P$ and $P_{0}$ be $n \times n$ orthogonal projections onto the column spaces of $X$ and $Z$ respectively. Show that for all $u \in \mathbb{R}^{n}, u^{T} P u \geqslant u^{T} P_{0} u$.

(e) Show that for all $x \in \mathbb{R}^{p}$,

$\operatorname{var}\left(x^{T} \tilde{\beta}\right) \leqslant \operatorname{var}\left(x^{T} \hat{\beta}\right) .$

[Hint: Argue that $x=X^{T} u$ for some $u \in \mathbb{R}^{n}$.]
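A numerical sketch of (d) and (e), taking $\sigma^{2}=1$ and an arbitrary random design. Since $X$ has full column rank, $X^{T}$ is surjective, so every $x \in \mathbb{R}^{p}$ is $X^{T} u$ for some $u$, which is why the variance inequality holds for all $x$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 4

X = rng.normal(size=(n, p))   # full column rank almost surely
Z = X[:, :p - 1]              # drop the final column

# Orthogonal projections onto col(X) and col(Z)
P  = X @ np.linalg.solve(X.T @ X, X.T)
P0 = Z @ np.linalg.solve(Z.T @ Z, Z.T)

# (d): u^T P u >= u^T P0 u for a random test vector u
u = rng.normal(size=n)

# (e): with sigma^2 = 1, var(x^T beta-hat) = x^T (X^T X)^{-1} x while
# var(x^T beta-tilde) = x_{1:p-1}^T (Z^T Z)^{-1} x_{1:p-1} (last coord of
# beta-tilde is fixed at 0)
x = rng.normal(size=p)
var_hat   = x @ np.linalg.solve(X.T @ X, x)
var_tilde = x[:p - 1] @ np.linalg.solve(Z.T @ Z, x[:p - 1])
print(u @ P @ u >= u @ P0 @ u - 1e-10, var_tilde <= var_hat + 1e-10)
```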

Paper 1, Section II, D

A motion sensor sits at the origin, in the middle of a field. The probability that you are detected as you sneak from one point to another along a path $\mathbf{x}(t): 0 \leqslant t \leqslant T$ is

$P[\mathbf{x}(t)]=\lambda \int_{0}^{T} \frac{v(t)}{r(t)} d t$

where $\lambda$ is a positive constant, $r(t)$ is your distance to the sensor, and $v(t)$ is your speed. (If $P[\mathbf{x}(t)] \geqslant 1$ for some path then you are detected with certainty.)

You start at point $(x, y)=(A, 0)$, where $A>0$. Your mission is to reach the point $(x, y)=(B \cos \alpha, B \sin \alpha)$, where $B>0$. What path should you take to minimise the chance of detection? Should you tiptoe or should you run?
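A quick sanity check on this first part: since $v \, d t=d s$, the cost $P=\lambda \int d s / r$ depends only on the geometry of the path, not the speed, so the detection probability is the same whether you tiptoe or run. Writing $u=\log r$ turns the cost into the Euclidean length $\lambda \int \sqrt{u'(\theta)^{2}+1} \, d \theta$, minimised by a straight line in $(\theta, \log r)$, i.e. a logarithmic spiral, with minimal cost $\lambda \sqrt{\alpha^{2}+\log ^{2}(B / A)}$. A numerical sketch comparing the spiral with the straight chord (the values $A=1$, $B=2$, $\alpha=\pi / 2$ are arbitrary illustrative choices):

```python
import numpy as np

A, B, alpha = 1.0, 2.0, np.pi / 2
lam = 1.0   # lambda merely scales the cost

def cost(xs, ys):
    """Discretised lambda * integral of ds / r along a polygonal path."""
    ds = np.hypot(np.diff(xs), np.diff(ys))
    r = np.hypot(xs, ys)
    rmid = 0.5 * (r[:-1] + r[1:])
    return lam * np.sum(ds / rmid)

t = np.linspace(0.0, 1.0, 200_001)

# Logarithmic spiral: a straight line in (theta, log r) coordinates
theta = alpha * t
r = A * np.exp(t * np.log(B / A))
c_spiral = cost(r * np.cos(theta), r * np.sin(theta))

# Straight chord between the same endpoints, for comparison
x1, y1 = B * np.cos(alpha), B * np.sin(alpha)
c_chord = cost(A + t * (x1 - A), t * y1)

c_exact = lam * np.sqrt(alpha**2 + np.log(B / A)**2)
print(c_spiral, c_chord, c_exact)  # spiral matches the closed form; chord is worse
```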

A new and improved sensor detects you with probability

$\tilde{P}[\mathbf{x}(t)]=\lambda \int_{0}^{T} \frac{v(t)^{2}}{r(t)} d t$

Show that the optimal path now satisfies the equation

$\left(\frac{d r}{d t}\right)^{2}=E r-h^{2}$

for some constants $E$ and $h$ that you should identify.
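A numerical check of this first integral. In polar coordinates the integrand is $L=(\dot{r}^{2}+r^{2} \dot{\theta}^{2}) / r$; since $L$ has no explicit $t$ or $\theta$ dependence, $E=v^{2} / r$ (the Hamiltonian, which here equals $L$) and $h=r \dot{\theta}$ are conserved, and eliminating $\dot{\theta}$ gives $\dot{r}^{2}=E r-h^{2}$. The sketch below integrates the Euler-Lagrange equations $\ddot{r}=(\dot{r}^{2}+r^{2} \dot{\theta}^{2}) /(2 r)$, $\ddot{\theta}=-\dot{r} \dot{\theta} / r$ with a hand-rolled RK4 step (arbitrary initial data) and monitors the residual:

```python
import numpy as np

def deriv(y):
    """Euler-Lagrange equations for L = (rdot^2 + r^2 thdot^2) / r."""
    r, rdot, th, thdot = y
    return np.array([rdot,
                     (rdot**2 + r**2 * thdot**2) / (2 * r),
                     thdot,
                     -rdot * thdot / r])

y = np.array([1.0, 0.3, 0.0, 0.5])   # arbitrary initial r, rdot, theta, thetadot
dt, steps = 1e-3, 2000

r0, rdot0, _, thdot0 = y
E = (rdot0**2 + r0**2 * thdot0**2) / r0   # conserved "energy" v^2 / r
h = r0 * thdot0                           # conserved r * thetadot

worst = 0.0
for _ in range(steps):                    # classic RK4 integrator
    k1 = deriv(y)
    k2 = deriv(y + dt / 2 * k1)
    k3 = deriv(y + dt / 2 * k2)
    k4 = deriv(y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    r, rdot = y[0], y[1]
    worst = max(worst, abs(rdot**2 - (E * r - h**2)))

print(worst)  # residual of rdot^2 = E r - h^2 stays at integrator accuracy
```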

Paper 2, Section I, D

Find the stationary points of the function $\phi=x y z$ subject to the constraint $x+a^{2} y^{2}+z^{2}=b^{2}$, with $a, b>0$. What are the maximum and minimum values attained by $\phi$, subject to this constraint, if we further restrict to $x \geqslant 0$ ?
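A numerical sketch of the final part, taking the constraint as printed (linear in $x$) and the illustrative values $a=b=1$: parametrising the surface by $x=b^{2}-a^{2} y^{2}-z^{2}$ and keeping only $x \geqslant 0$, a grid search locates extrema $\pm 1 / 8$, consistent with the stationary points found by Lagrange multipliers (attained at $x=1/2$, $y= \pm 1/2$, $z= \pm 1/2$):

```python
import numpy as np

a, b = 1.0, 1.0   # illustrative values only

# Parametrise the constraint x = b^2 - a^2 y^2 - z^2 and keep only x >= 0
y, z = np.meshgrid(np.linspace(-1.2, 1.2, 1201), np.linspace(-1.2, 1.2, 1201))
x = b**2 - a**2 * y**2 - z**2
phi = np.where(x >= 0, x * y * z, np.nan)

print(np.nanmax(phi), np.nanmin(phi))  # approximately 0.125 and -0.125
```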