• # Paper 4, Section I, 2F

Let $X$ be a topological space with an equivalence relation, $\tilde{X}$ the set of equivalence classes, and $\pi: X \rightarrow \tilde{X}$ the quotient map taking a point in $X$ to its equivalence class.

(a) Define the quotient topology on $\tilde{X}$ and check it is a topology.

(b) Prove that if $Y$ is a topological space, a map $f: \tilde{X} \rightarrow Y$ is continuous if and only if $f \circ \pi$ is continuous.

(c) If $X$ is Hausdorff, is it true that $\tilde{X}$ is also Hausdorff? Justify your answer.

• # Paper 4, Section II, F

(a) Let $g:[0,1] \times \mathbb{R}^{n} \rightarrow \mathbb{R}$ be a continuous function such that for each $t \in[0,1]$, the partial derivatives $D_{i} g(t, x)(i=1, \ldots, n)$ of $x \mapsto g(t, x)$ exist and are continuous on $[0,1] \times \mathbb{R}^{n}$. Define $G: \mathbb{R}^{n} \rightarrow \mathbb{R}$ by

$G(x)=\int_{0}^{1} g(t, x) d t$

Show that $G$ has continuous partial derivatives $D_{i} G$ given by

$D_{i} G(x)=\int_{0}^{1} D_{i} g(t, x) d t$

for $i=1, \ldots, n$.

(b) Let $f: \mathbb{R}^{2} \rightarrow \mathbb{R}$ be an infinitely differentiable function, that is, partial derivatives $D_{i_{1}} D_{i_{2}} \cdots D_{i_{k}} f$ exist and are continuous for all $k \in \mathbb{N}$ and $i_{1}, \ldots, i_{k} \in\{1,2\}$. Show that for any $\left(x_{1}, x_{2}\right) \in \mathbb{R}^{2}$,

$f\left(x_{1}, x_{2}\right)=f\left(x_{1}, 0\right)+x_{2} D_{2} f\left(x_{1}, 0\right)+x_{2}^{2} h\left(x_{1}, x_{2}\right)$

where $h: \mathbb{R}^{2} \rightarrow \mathbb{R}$ is an infinitely differentiable function.

[Hint: You may use the fact that if $u: \mathbb{R} \rightarrow \mathbb{R}$ is infinitely differentiable, then

$u(1)=u(0)+u^{\prime}(0)+\int_{0}^{1}(1-t) u^{\prime \prime}(t) d t$.]
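As a quick numerical sanity check of the hint (not part of the question), the identity $u(1)=u(0)+u'(0)+\int_0^1(1-t)u''(t)\,dt$ can be verified for a sample smooth function; here $u=\exp$ is an illustrative choice.

```python
import math

def taylor_identity_residual(u, u1, u2, steps=1000):
    """Residual of u(1) - [u(0) + u'(0) + integral_0^1 (1-t) u''(t) dt].

    u, u1, u2 are the function and its first two derivatives; the
    integral is approximated by the composite trapezoidal rule."""
    h = 1.0 / steps
    integral = 0.0
    for i in range(steps):
        t0, t1 = i * h, (i + 1) * h
        integral += 0.5 * h * ((1 - t0) * u2(t0) + (1 - t1) * u2(t1))
    return u(1.0) - (u(0.0) + u1(0.0) + integral)

# Example: u(t) = exp(t); the residual should vanish up to quadrature error.
res = taylor_identity_residual(math.exp, math.exp, math.exp)
```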

• # Paper 4, Section I, 3G

Let $f$ be a holomorphic function on a neighbourhood of $a \in \mathbb{C}$. Assume that $f$ has a zero of order $k$ at $a$ with $k \geqslant 1$. Show that there exist $\varepsilon>0$ and $\delta>0$ such that for any $b$ with $0<|b|<\varepsilon$ there are exactly $k$ distinct values of $z \in D(a, \delta)$ with $f(z)=b$.

• # Paper 4, Section II, B

Let $f(t)$ be defined for $t \geqslant 0$. Define the Laplace transform $\widehat{f}(s)$ of $f$. Find an expression for the Laplace transform of $\frac{d f}{d t}$ in terms of $\widehat{f}$.

Three radioactive nuclei decay sequentially, so that the numbers $N_{i}(t)$ of the three types obey the equations

\begin{aligned} \frac{d N_{1}}{d t} &=-\lambda_{1} N_{1} \\ \frac{d N_{2}}{d t} &=\lambda_{1} N_{1}-\lambda_{2} N_{2} \\ \frac{d N_{3}}{d t} &=\lambda_{2} N_{2}-\lambda_{3} N_{3} \end{aligned}

where $\lambda_{3}>\lambda_{2}>\lambda_{1}>0$ are constants. Initially, at $t=0, N_{1}=N, N_{2}=0$ and $N_{3}=n$. Using Laplace transforms, find $N_{3}(t)$.

By taking an appropriate limit, find $N_{3}(t)$ when $\lambda_{2}=\lambda_{1}=\lambda>0$ and $\lambda_{3}>\lambda$.
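As an editorial cross-check (not part of the question), the closed form that the Laplace-transform calculation should produce for distinct rates is the standard Bateman solution; below it is compared against direct RK4 integration of the ODE system. The numerical values of $N$, $n$ and the $\lambda_i$ are illustrative assumptions, and the confluent case $\lambda_2=\lambda_1$ can be probed by taking $\lambda_2=\lambda_1+\varepsilon$.

```python
import math

def bateman_N3(t, N, n, l1, l2, l3):
    """Closed form for N3(t) via partial fractions of the transformed
    system (assumes l1, l2, l3 pairwise distinct)."""
    chain = N * l1 * l2 * (
        math.exp(-l1 * t) / ((l2 - l1) * (l3 - l1))
        + math.exp(-l2 * t) / ((l1 - l2) * (l3 - l2))
        + math.exp(-l3 * t) / ((l1 - l3) * (l2 - l3)))
    return n * math.exp(-l3 * t) + chain

def rk4_N3(t_end, N, n, l1, l2, l3, steps=4000):
    """Integrate the decay ODEs directly with classical RK4."""
    def deriv(s):
        n1, n2, n3 = s
        return (-l1 * n1, l1 * n1 - l2 * n2, l2 * n2 - l3 * n3)
    h = t_end / steps
    s = (N, 0.0, n)
    for _ in range(steps):
        k1 = deriv(s)
        k2 = deriv(tuple(x + 0.5 * h * k for x, k in zip(s, k1)))
        k3 = deriv(tuple(x + 0.5 * h * k for x, k in zip(s, k2)))
        k4 = deriv(tuple(x + h * k for x, k in zip(s, k3)))
        s = tuple(x + h / 6 * (a + 2 * b + 2 * c + d)
                  for x, a, b, c, d in zip(s, k1, k2, k3, k4))
    return s[2]

# Illustrative parameters (not from the question): N = 100, n = 5.
approx = rk4_N3(2.0, 100.0, 5.0, 1.0, 2.0, 3.0)
exact = bateman_N3(2.0, 100.0, 5.0, 1.0, 2.0, 3.0)
```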

• # Paper 4, Section I, 5D

Write down Maxwell's equations in a vacuum. Show that they admit wave solutions with

$\mathbf{B}(\mathbf{x}, t)=\operatorname{Re}\left[\mathbf{B}_{0} e^{i(\mathbf{k} \cdot \mathbf{x}-\omega t)}\right]$

where $\mathbf{B}_{0}, \mathbf{k}$ and $\omega$ must obey certain conditions that you should determine. Find the corresponding electric field $\mathbf{E}(\mathbf{x}, t)$.

A light wave, travelling in the $x$-direction and linearly polarised so that the magnetic field points in the $z$-direction, is incident upon a conductor that occupies the half-space $x>0$. The electric and magnetic fields obey the boundary conditions $\mathbf{E} \times \mathbf{n}=\mathbf{0}$ and $\mathbf{B} \cdot \mathbf{n}=0$ on the surface of the conductor, where $\mathbf{n}$ is the unit normal vector. Determine the contributions to the magnetic field from the incident and reflected waves in the region $x \leqslant 0$. Compute the magnetic field tangential to the surface of the conductor.

• # Paper 4, Section II, A

Consider the spherically symmetric motion induced by the collapse of a spherical cavity of radius $a(t)$, centred on the origin. For $r<a$, there is a vacuum, while for $r>a$, there is an inviscid incompressible fluid with constant density $\rho$. At time $t=0$, $a=a_{0}$, and the fluid is at rest and at constant pressure $p_{0}$.

(a) Consider the radial volume transport in the fluid $Q(R, t)$, defined as

$Q(R, t)=\int_{r=R} u d S$

where $u$ is the radial velocity, and $d S$ is an infinitesimal element of the surface of a sphere of radius $R \geqslant a$. Use the incompressibility condition to establish that $Q$ is a function of time alone.

(b) Using the expression for pressure in potential flow or otherwise, establish that

$\frac{1}{4 \pi a} \frac{d Q}{d t}-\frac{(\dot{a})^{2}}{2}=-\frac{p_{0}}{\rho}$

where $\dot{a}(t)$ is the radial velocity of the cavity boundary.

(c) By expressing $Q(t)$ in terms of $a$ and $\dot{a}$, show that

$\dot{a}=-\sqrt{\frac{2 p_{0}}{3 \rho}\left(\frac{a_{0}^{3}}{a^{3}}-1\right)}$

[Hint: You may find it useful to assume $\dot{a}(t)$ is an explicit function of $a$ from the outset.]

(d) Hence write down an integral expression for the implosion time $\tau$, i.e. the time for the radius of the cavity $a \rightarrow 0$. [Do not attempt to evaluate the integral.]
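The question asks only for the integral expression, but as an editorial cross-check the result of part (c) gives $\tau = a_0\sqrt{3\rho/(2p_0)}\,I$ with the dimensionless integral $I=\int_0^1 du/\sqrt{u^{-3}-1}$. The Beta-function value $I=\sqrt{\pi}\,\Gamma(5/6)/\Gamma(1/3)$ quoted below is a standard identity assumed here, not derived in the question; the sketch evaluates $I$ numerically after the substitution $u=1-s^2$, which softens the endpoint singularity.

```python
import math

def dimensionless_collapse_integral(steps=200000):
    """Evaluate I = integral_0^1 du / sqrt(u**-3 - 1) by the midpoint
    rule, after substituting u = 1 - s**2 so the integrand stays
    bounded near u = 1."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        s = (i + 0.5) * h
        u = 1.0 - s * s
        total += 2.0 * s / math.sqrt(u ** -3 - 1.0) * h
    return total

I_num = dimensionless_collapse_integral()
# Assumed closed form: I = sqrt(pi) * Gamma(5/6) / Gamma(1/3).
I_exact = math.sqrt(math.pi) * math.gamma(5.0 / 6.0) / math.gamma(1.0 / 3.0)
# With this, tau = a0 * sqrt(3*rho/(2*p0)) * I_exact.
```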

• # Paper 4, Section II, F

Define an abstract smooth surface and explain what it means for the surface to be orientable. Given two smooth surfaces $S_{1}$ and $S_{2}$ and a map $f: S_{1} \rightarrow S_{2}$, explain what it means for $f$ to be smooth.

For the cylinder

$C=\left\{(x, y, z) \in \mathbb{R}^{3}: x^{2}+y^{2}=1\right\},$

let $a: C \rightarrow C$ be the orientation reversing diffeomorphism $a(x, y, z)=(-x,-y,-z)$. Let $S$ be the quotient of $C$ by the equivalence relation $p \sim a(p)$ and let $\pi: C \rightarrow S$ be the canonical projection map. Show that $S$ can be made into an abstract smooth surface so that $\pi$ is smooth. Is $S$ orientable? Justify your answer.

• # Paper 4, Section II, G

Let $H$ and $P$ be subgroups of a finite group $G$. Show that the sets $H x P, x \in G$, partition $G$. By considering the action of $H$ on the set of left cosets of $P$ in $G$ by left multiplication, or otherwise, show that

$\frac{|H x P|}{|P|}=\frac{|H|}{\left|H \cap x P x^{-1}\right|}$

for any $x \in G$. Deduce that if $G$ has a Sylow $p$-subgroup, then so does $H$.

Let $p, n \in \mathbb{N}$ with $p$ a prime. Write down the order of the group $G L_{n}(\mathbb{Z} / p \mathbb{Z})$. Identify in $G L_{n}(\mathbb{Z} / p \mathbb{Z})$ a Sylow $p$-subgroup and a subgroup isomorphic to the symmetric group $S_{n}$. Deduce that every finite group has a Sylow $p$-subgroup.
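The order asked for is $|GL_n(\mathbb{Z}/p\mathbb{Z})|=\prod_{i=0}^{n-1}(p^n-p^i)$. As an illustrative sanity check only (the formula itself is what the question expects you to write down), a brute-force count over all matrices for small $n$ and $p$ agrees with the product formula:

```python
from itertools import product

def det_mod(M, p):
    """Determinant of a square matrix over Z/pZ by cofactor expansion."""
    n = len(M)
    if n == 1:
        return M[0][0] % p
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        sign = -1 if j % 2 else 1
        total += sign * M[0][j] * det_mod(minor, p)
    return total % p

def count_GL(n, p):
    """Count invertible n x n matrices over Z/pZ by brute force."""
    count = 0
    for entries in product(range(p), repeat=n * n):
        M = [list(entries[i * n:(i + 1) * n]) for i in range(n)]
        if det_mod(M, p) != 0:
            count += 1
    return count

def order_formula(n, p):
    """|GL_n(Z/pZ)| = (p^n - 1)(p^n - p) ... (p^n - p^(n-1))."""
    result = 1
    for i in range(n):
        result *= p ** n - p ** i
    return result

checks = [(count_GL(n, p), order_formula(n, p)) for n, p in [(2, 2), (2, 3)]]
```

For instance $|GL_2(\mathbb{Z}/2\mathbb{Z})| = (4-1)(4-2) = 6$, consistent with $GL_2(\mathbb{Z}/2\mathbb{Z}) \cong S_3$.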

State Sylow's theorem on the number of Sylow $p$-subgroups of a finite group.

Let $G$ be a group of order $p q$, where $p>q$ are prime numbers. Show that if $G$ is non-abelian, then $q \mid p-1$.

• # Paper 4, Section I, 1E

Let $\operatorname{Mat}_{n}(\mathbb{C})$ be the vector space of $n$ by $n$ complex matrices.

Given $A \in \operatorname{Mat}_{n}(\mathbb{C})$, define the linear map $\varphi_{A}: \operatorname{Mat}_{n}(\mathbb{C}) \rightarrow \operatorname{Mat}_{n}(\mathbb{C})$,

$X \mapsto A X-X A$

(i) Compute a basis of eigenvectors, and their associated eigenvalues, when $A$ is the diagonal matrix

$A=\left(\begin{array}{llll} 1 & & & \\ & 2 & & \\ & & \ddots & \\ & & & n \end{array}\right)$

What is the rank of $\varphi_{A}$ ?

(ii) Now let $A=\left(\begin{array}{ll}0 & 1 \\ 0 & 0\end{array}\right)$. Write down the matrix of the linear transformation $\varphi_{A}$ with respect to the standard basis of $\operatorname{Mat}_{2}(\mathbb{C})$.

What is its Jordan normal form?
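A small computational sketch for part (ii), offered as a cross-check rather than a solution: build the $4\times 4$ matrix of $\varphi_A$ in the basis $E_{11}, E_{12}, E_{21}, E_{22}$ and confirm it is nilpotent of index 3, consistent with a Jordan normal form of one $3\times 3$ nilpotent block plus one $1\times 1$ zero block.

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def phi_matrix(A):
    """Matrix of X -> AX - XA on 2x2 matrices, in the basis
    E11, E12, E21, E22 (coordinates of a 2x2 matrix read row by row)."""
    basis = []
    for r in range(2):
        for c in range(2):
            E = [[0, 0], [0, 0]]
            E[r][c] = 1
            basis.append(E)
    cols = []
    for E in basis:
        Y = [[sum(A[i][k] * E[k][j] for k in range(2))
              - sum(E[i][k] * A[k][j] for k in range(2))
              for j in range(2)] for i in range(2)]
        cols.append([Y[0][0], Y[0][1], Y[1][0], Y[1][1]])
    # Columns of the matrix are the images of the basis vectors.
    return [[cols[j][i] for j in range(4)] for i in range(4)]

A = [[0, 1], [0, 0]]
M = phi_matrix(A)
M2 = mat_mul(M, M)
M3 = mat_mul(M2, M)   # expected to vanish: phi_A is nilpotent of index 3
```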

• # Paper 4, Section II, E

(a) Let $V$ be a complex vector space of dimension $n$.

What is a Hermitian form on $V$ ?

Given a Hermitian form, define the matrix $A$ of the form with respect to the basis $v_{1}, \ldots, v_{n}$ of $V$, and describe in terms of $A$ the value of the Hermitian form on two elements of $V$.

Now let $w_{1}, \ldots, w_{n}$ be another basis of $V$. Suppose $w_{i}=\sum_{j} p_{i j} v_{j}$, and let $P=\left(p_{i j}\right)$. Write down the matrix of the form with respect to this new basis in terms of $A$ and $P$.

Let $N=V^{\perp}$. Describe the dimension of $N$ in terms of the matrix $A$.

(b) Write down the matrix of the real quadratic form

$x^{2}+y^{2}+2 z^{2}+2 x y+2 x z-2 y z .$

Using the Gram-Schmidt algorithm, find a basis which diagonalises the form. What are its rank and signature?
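A sketch of part (b) by symmetric congruence reduction (each row operation paired with the matching column operation, the matrix form of the diagonalisation procedure), in exact rational arithmetic; the rank and signature are then read off from the signs of the diagonal entries.

```python
from fractions import Fraction

def congruence_diagonalise(A):
    """Diagonalise a symmetric matrix by congruence P^T A P, applying
    each row operation together with the matching column operation.
    Returns the diagonal entries as exact rationals."""
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    for k in range(n):
        if A[k][k] == 0:
            # Bring a nonzero diagonal entry into the pivot slot, if any.
            for j in range(k + 1, n):
                if A[j][j] != 0:
                    A[k], A[j] = A[j], A[k]
                    for row in A:
                        row[k], row[j] = row[j], row[k]
                    break
            else:
                # Otherwise mix in a row with a nonzero off-diagonal entry.
                for j in range(k + 1, n):
                    if A[k][j] != 0:
                        for c in range(n):
                            A[k][c] += A[j][c]
                        for r in range(n):
                            A[r][k] += A[r][j]
                        break
        if A[k][k] == 0:
            continue
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for c in range(n):
                A[i][c] -= m * A[k][c]
            for r in range(n):
                A[r][i] -= m * A[r][k]
    return [A[i][i] for i in range(n)]

# Matrix of x^2 + y^2 + 2z^2 + 2xy + 2xz - 2yz
# (off-diagonal entry = half the cross-term coefficient).
Q = [[1, 1, 1], [1, 1, -1], [1, -1, 2]]
diag = congruence_diagonalise(Q)
rank = sum(1 for d in diag if d != 0)
signature = sum(1 for d in diag if d > 0) - sum(1 for d in diag if d < 0)
```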

(c) Let $V$ be a real vector space, and $\langle\cdot,\cdot\rangle$ a symmetric bilinear form on it. Let $A$ be the matrix of this form in some basis.

Prove that the signature of $\langle\cdot,\cdot\rangle$ is the number of positive eigenvalues of $A$ minus the number of negative eigenvalues.

Explain, using an example, why the eigenvalues themselves depend on the choice of a basis.

• # Paper 4, Section I, H

Show that the simple symmetric random walk on $\mathbb{Z}$ is recurrent.

Three particles perform independent simple symmetric random walks on $\mathbb{Z}$. What is the probability that they are all simultaneously at 0 infinitely often? Justify your answer.

[You may assume without proof that there exist constants $A, B>0$ such that $A \sqrt{n}(n / e)^{n} \leqslant n ! \leqslant B \sqrt{n}(n / e)^{n}$ for all positive integers $n$.]
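As a numerical companion to the intended argument (offered as a sketch, not a proof): the exact return probability is $p_{2n}=\binom{2n}{n}4^{-n}$, the hinted Stirling bounds give $p_{2n}\sim(\pi n)^{-1/2}$, and for three independent walks the probability all are at 0 at time $2n$ is $p_{2n}^3\sim(\pi n)^{-3/2}$, whose sum converges (pointing to Borel–Cantelli).

```python
import math

def p_return(n):
    """P(simple symmetric walk is at 0 at time 2n) = C(2n, n) / 4^n,
    computed via log-gamma to avoid overflow at large n."""
    return math.exp(math.lgamma(2 * n + 1) - 2 * math.lgamma(n + 1)
                    - 2 * n * math.log(2.0))

# Stirling asymptotics: p_return(n) * sqrt(pi*n) -> 1.
ratio = p_return(10000) * math.sqrt(math.pi * 10000)

# Partial sums of p_return(n)**3: bounded, so the series converges.
partial_sums = [sum(p_return(n) ** 3 for n in range(1, N))
                for N in (10, 100, 1000)]
```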

• # Paper 4, Section II, C

The function $\theta(x, t)$ obeys the diffusion equation

$\frac{\partial \theta}{\partial t}=D \frac{\partial^{2} \theta}{\partial x^{2}} \tag{*}$

Verify that

$\theta(x, t)=\frac{1}{\sqrt{t}} e^{-x^{2} / 4 D t}$

is a solution of $(*)$, and by considering $\int_{-\infty}^{\infty} \theta(x, t) d x$, find the solution having the initial form $\theta(x, 0)=\delta(x)$ at $t=0$.

Find, in terms of the error function, the solution of $(*)$ having the initial form

$\theta(x, 0)= \begin{cases}1, & |x| \leqslant 1 \\ 0, & |x|>1\end{cases}$

Sketch a graph of this solution at various times $t \geqslant 0$.

[The error function is

$\operatorname{Erf}(x)=\frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-y^{2}} d y$.]
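As an editorial check of the expected answer (stated here as an assumption to be verified numerically, not derived): the solution for the top-hat initial data should be $\theta(x,t)=\tfrac{1}{2}\big[\operatorname{Erf}\!\big(\tfrac{1-x}{\sqrt{4Dt}}\big)+\operatorname{Erf}\!\big(\tfrac{1+x}{\sqrt{4Dt}}\big)\big]$; the sketch below checks the PDE by finite differences and the initial condition at small $t$.

```python
import math

def theta(x, t, D=1.0):
    """Candidate solution: the top-hat initial data smoothed by diffusion."""
    s = math.sqrt(4.0 * D * t)
    return 0.5 * (math.erf((1.0 - x) / s) + math.erf((1.0 + x) / s))

def pde_residual(x, t, D=1.0, h=1e-4):
    """theta_t - D * theta_xx, approximated by central differences."""
    tt = (theta(x, t + h, D) - theta(x, t - h, D)) / (2.0 * h)
    xx = (theta(x + h, t, D) - 2.0 * theta(x, t, D)
          + theta(x - h, t, D)) / (h * h)
    return tt - D * xx

residuals = [abs(pde_residual(x, 0.5)) for x in (-2.0, -0.5, 0.0, 1.0, 3.0)]
initial = [theta(x, 1e-8) for x in (-0.5, 0.0, 0.5, 2.0)]  # ~1 inside, ~0 outside
```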

• # Paper 4, Section I, B

(a) Given the data $f(0)=0, f(1)=4, f(2)=2, f(3)=8$, find the interpolating cubic polynomial $p_{3} \in \mathbb{P}_{3}[x]$ in the Newton form.

(b) We add to the data one more value, $f(-2)=10$. Find the interpolating quartic polynomial $p_{4} \in \mathbb{P}_{4}[x]$ for the extended data in the Newton form.
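The divided-difference table behind the Newton form can be computed mechanically; the sketch below does this in exact rational arithmetic for the given data, and illustrates the key property exploited in part (b): adding the node $x_4=-2$ only appends one further coefficient, leaving the earlier ones unchanged.

```python
from fractions import Fraction

def newton_coeffs(xs, ys):
    """Divided-difference coefficients of the Newton interpolating
    polynomial, in exact rational arithmetic."""
    n = len(xs)
    table = [Fraction(y) for y in ys]
    coeffs = [table[0]]
    for k in range(1, n):
        table = [(table[i + 1] - table[i]) / (xs[i + k] - xs[i])
                 for i in range(n - k)]
        coeffs.append(table[0])
    return coeffs

def newton_eval(coeffs, xs, x):
    """Evaluate the Newton form by Horner-style nesting."""
    result = coeffs[-1]
    for c, x0 in zip(reversed(coeffs[:-1]), reversed(xs[:len(coeffs) - 1])):
        result = result * (x - x0) + c
    return result

xs, ys = [0, 1, 2, 3], [0, 4, 2, 8]
c3 = newton_coeffs(xs, ys)            # coefficients of p3
xs4, ys4 = xs + [-2], ys + [10]
c4 = newton_coeffs(xs4, ys4)          # p4: one extra coefficient
```

For this data the table gives $p_3(x)=4x-3x(x-1)+\tfrac{7}{3}x(x-1)(x-2)$.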

• # Paper 4, Section II, H

(a) Consider the linear program

\begin{aligned} P: \quad \text { maximise over } x \geqslant 0, & c^{T} x \\ \text { subject to } & A x=b \end{aligned}

where $A \in \mathbb{R}^{m \times n}, c \in \mathbb{R}^{n}$ and $b \in \mathbb{R}^{m}$. What is meant by a basic feasible solution?

(b) Prove that if $P$ has a finite maximum, then there exists a solution that is a basic feasible solution.

(c) Now consider the optimization problem

$Q: \quad \text { maximise over } x \geqslant 0, \quad \frac{c^{T} x}{d^{T} x}$

subject to $A x=b$,

$d^{T} x>0,$

where the matrix $A$ and the vectors $c, b$ are as in problem $P$, and $d \in \mathbb{R}^{n}$. Suppose there exists a solution $x^{*}$ to $Q$. Further consider the linear program

\begin{aligned} R: \quad \text { maximise over } y \geqslant 0, t \geqslant 0, \quad & c^{T} y \\ \text { subject to } \quad & A y=b t \\ & d^{T} y=1 \end{aligned}

(i) Suppose $d_{i}>0$ for all $i=1, \ldots, n$. Show that the maximum of $R$ is finite and at least as large as that of $Q$.

(ii) Suppose, in addition to the condition in part (i), that the entries of $A$ are strictly positive. Show that the maximum of $R$ is equal to that of $Q$.

(iii) Let $\mathcal{B}$ be the set of basic feasible solutions of the linear program $P$. Assuming the conditions in parts (i) and (ii) above, show that

$\frac{c^{T} x^{*}}{d^{T} x^{*}}=\max _{x \in \mathcal{B}} \frac{c^{T} x}{d^{T} x}$

[Hint: Argue that if $(y, t)$ is in the set $\mathcal{A}$ of basic feasible solutions to $R$, then $y / t \in \mathcal{B}$.]
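A toy instance (an assumption purely for illustration, not part of the question) makes the conclusion of (iii) concrete: with $A=(1\;1\;1)$ and $b=1$ the feasible set of $P$ is the probability simplex, whose basic feasible solutions are the unit vectors, and the maximum of $c^Tx/d^Tx$ over the whole simplex is attained at one of them.

```python
import random

# Toy data (illustrative): feasible set {x >= 0, x1+x2+x3 = 1}.
c = [3.0, 5.0, 4.0]
d = [2.0, 1.0, 4.0]   # strictly positive, as in part (i)

def ratio(x):
    """Objective of Q: c^T x / d^T x."""
    num = sum(ci * xi for ci, xi in zip(c, x))
    den = sum(di * xi for di, xi in zip(d, x))
    return num / den

# Basic feasible solutions of this P are the unit vectors.
bfs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
best_bfs = max(ratio(x) for x in bfs)

# Random points of the simplex never beat the best vertex.
random.seed(0)
best_sampled = best_bfs
for _ in range(20000):
    w = [random.random() for _ in range(3)]
    s = sum(w)
    best_sampled = max(best_sampled, ratio([wi / s for wi in w]))
```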

• # Paper 4, Section I, C

Let $\Psi(x, t)$ be the wavefunction for a particle of mass $m$ moving in one dimension in a potential $U(x)$. Show that, with suitable boundary conditions as $x \rightarrow \pm \infty$,

$\frac{d}{d t} \int_{-\infty}^{\infty}|\Psi(x, t)|^{2} d x=0$

Why is this important for the interpretation of quantum mechanics?

Verify the result above by first calculating $|\Psi(x, t)|^{2}$ for the free particle solution

$\Psi(x, t)=C f(t)^{1 / 2} \exp \left(-\frac{1}{2} f(t) x^{2}\right) \quad \text { with } \quad f(t)=\left(\alpha+\frac{i \hbar}{m} t\right)^{-1}$

where $C$ and $\alpha>0$ are real constants, and then considering the resulting integral.
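A numerical sketch of the verification (with the illustrative units $\hbar/m=1$, $\alpha=1$, $C=1$, all assumptions of this check): the integral of $|\Psi|^2$ should come out the same at every time.

```python
import cmath, math

ALPHA = 1.0          # the real constant alpha > 0
HBAR_OVER_M = 1.0    # illustrative choice of units

def psi_sq(x, t):
    """|Psi(x,t)|^2 for Psi = C f(t)^(1/2) exp(-f(t) x^2 / 2), with C = 1."""
    f = 1.0 / (ALPHA + 1j * HBAR_OVER_M * t)
    amp = cmath.sqrt(f) * cmath.exp(-0.5 * f * x * x)
    return abs(amp) ** 2

def norm(t, half_width=30.0, steps=6000):
    """Trapezoidal approximation to the integral of |Psi|^2 over the line."""
    h = 2.0 * half_width / steps
    total = 0.0
    for i in range(steps + 1):
        x = -half_width + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * psi_sq(x, t) * h
    return total

norms = [norm(t) for t in (0.0, 1.0, 5.0)]   # should all agree
```

With these units the common value is $\sqrt{\pi/\alpha}$, independent of $t$, matching the conservation result.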

• # Paper 4, Section II, C

(a) Consider the angular momentum operators $\hat{L}_{x}, \hat{L}_{y}, \hat{L}_{z}$ and $\hat{\mathbf{L}}^{2}=\hat{L}_{x}^{2}+\hat{L}_{y}^{2}+\hat{L}_{z}^{2}$ where

$\hat{L}_{z}=\hat{x} \hat{p}_{y}-\hat{y} \hat{p}_{x}, \quad \hat{L}_{x}=\hat{y} \hat{p}_{z}-\hat{z} \hat{p}_{y} \text { and } \hat{L}_{y}=\hat{z} \hat{p}_{x}-\hat{x} \hat{p}_{z} .$

Use the standard commutation relations for these operators to show that

$\hat{L}_{\pm}=\hat{L}_{x} \pm i \hat{L}_{y} \quad \text { obeys } \quad\left[\hat{L}_{z}, \hat{L}_{\pm}\right]=\pm \hbar \hat{L}_{\pm} \quad \text { and } \quad\left[\hat{\mathbf{L}}^{2}, \hat{L}_{\pm}\right]=0$

Deduce that if $\varphi$ is a joint eigenstate of $\hat{L}_{z}$ and $\hat{\mathbf{L}}^{2}$ with angular momentum quantum numbers $m$ and $\ell$ respectively, then $\hat{L}_{\pm} \varphi$ are also joint eigenstates, provided they are non-zero, with quantum numbers $m \pm 1$ and $\ell$.

(b) A harmonic oscillator of mass $M$ in three dimensions has Hamiltonian

$\hat{H}=\frac{1}{2 M}\left(\hat{p}_{x}^{2}+\hat{p}_{y}^{2}+\hat{p}_{z}^{2}\right)+\frac{1}{2} M \omega^{2}\left(\hat{x}^{2}+\hat{y}^{2}+\hat{z}^{2}\right) .$

Find eigenstates of $\hat{H}$ in terms of eigenstates $\psi_{n}$ for an oscillator in one dimension with $n=0,1,2, \ldots$ and eigenvalues $\hbar \omega\left(n+\frac{1}{2}\right)$; hence determine the eigenvalues $E$ of $\hat{H}$.

Verify that the ground state for $\hat{H}$ is a joint eigenstate of $\hat{L}_{z}$ and $\hat{\mathbf{L}}^{2}$ with $\ell=m=0$. At the first excited energy level, find an eigenstate of $\hat{L}_{z}$ with $m=0$ and construct from this two eigenstates of $\hat{L}_{z}$ with $m=\pm 1$.

Why should you expect to find joint eigenstates of $\hat{L}_{z}, \hat{\mathbf{L}}^{2}$ and $\hat{H}$ ?

[The first two eigenstates for an oscillator in one dimension are $\psi_{0}(x)=C_{0} \exp \left(-M \omega x^{2} / 2 \hbar\right)$ and $\psi_{1}(x)=C_{1} x \exp \left(-M \omega x^{2} / 2 \hbar\right)$, where $C_{0}$ and $C_{1}$ are normalisation constants.]

• # Paper 4, Section II, 17H

Suppose we wish to estimate the probability $\theta \in(0,1)$ that a potentially biased coin lands heads up when tossed. After $n$ independent tosses, we observe $X$ heads.

(a) Write down the maximum likelihood estimator $\hat{\theta}$ of $\theta$.

(b) Find the mean squared error $f(\theta)$ of $\hat{\theta}$ as a function of $\theta$. Compute $\sup _{\theta \in(0,1)} f(\theta)$.

(c) Suppose a uniform prior is placed on $\theta$. Find the Bayes estimator of $\theta$ under squared error loss $L(\theta, a)=(\theta-a)^{2}$.

(d) Now find the Bayes estimator $\tilde{\theta}$ under the loss $L(\theta, a)=\theta^{\alpha-1}(1-\theta)^{\beta-1}(\theta-a)^{2}$, where $\alpha, \beta \geqslant 1$. Show that

$\tilde{\theta}=w \hat{\theta}+(1-w) \theta_{0}, \tag{*}$

where $w$ and $\theta_{0}$ depend on $n, \alpha$ and $\beta$.

(e) Determine the mean squared error $g_{w, \theta_{0}}(\theta)$ of $\tilde{\theta}$ as defined by $(*)$.

(f) For what range of values of $w$ do we have $\sup _{\theta \in(0,1)} g_{w, 1 / 2}(\theta) \leqslant \sup _{\theta \in(0,1)} f(\theta)$ ?

[Hint: The mean of a Beta $(a, b)$ distribution is $a /(a+b)$ and its density $p(u)$ at $u \in[0,1]$ is $c_{a, b} u^{a-1}(1-u)^{b-1}$, where $c_{a, b}$ is a normalising constant.]
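An exact check of parts (a)–(b) only, offered as a sketch: with $\hat\theta=X/n$ the mean squared error is $f(\theta)=\theta(1-\theta)/n$, with supremum $1/(4n)$ at $\theta=1/2$; both can be confirmed by summing over the binomial distribution directly.

```python
import math

def mse_mle(theta, n):
    """Exact E[(X/n - theta)^2] for X ~ Binomial(n, theta)."""
    return sum(math.comb(n, x) * theta ** x * (1 - theta) ** (n - x)
               * (x / n - theta) ** 2 for x in range(n + 1))

n = 20   # illustrative sample size
grid = [i / 100 for i in range(1, 100)]
errors = [abs(mse_mle(th, n) - th * (1 - th) / n) for th in grid]
sup_mse = max(mse_mle(th, n) for th in grid)   # attained at theta = 1/2
```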

• # Paper 4, Section II, 13D

(a) Consider the functional

$I[y]=\int_{a}^{b} L\left(y, y^{\prime} ; x\right) d x$

where $0<a<b$, and $y(x)$ is subject to the requirement that $y(a)$ and $y(b)$ are some fixed constants. Derive the equation satisfied by $y(x)$ when $\delta I=0$ for all variations $\delta y$ that respect the boundary conditions.

(b) Consider the function

$L\left(y, y^{\prime} ; x\right)=\frac{\sqrt{1+y^{\prime 2}}}{x} .$

Verify that, if $y(x)$ describes an arc of a circle, with centre on the $y$-axis, then $\delta I=0$.

(c) Consider the function

$L\left(y, y^{\prime} ; x\right)=\frac{\sqrt{1+y^{\prime 2}}}{y}$

Find $y(x)$ such that $\delta I=0$ subject to the requirement that $y(a)=a$ and $y(b)=\sqrt{2 a b-b^{2}}$, with $b<2 a$. Sketch the curve $y(x)$.
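A finite-difference sketch checking both claimed extremals via first integrals of the Euler–Lagrange equation (the specific circles below are illustrative choices consistent with the question): for (b), $L$ has no explicit $y$, so $\partial L/\partial y' = y'/(x\sqrt{1+y'^2})$ is conserved; for (c), $L$ has no explicit $x$, so the Beltrami identity gives $L-y'\,\partial L/\partial y' = 1/(y\sqrt{1+y'^2})$ conserved, and $y(b)=\sqrt{2ab-b^2}$ identifies the semicircle $(x-a)^2+y^2=a^2$.

```python
import math

def dy(f, x, h=1e-6):
    """Central finite-difference derivative."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# (b) Arc of a circle centred on the y-axis: x^2 + (y - 2)^2 = 1.
def y_b(x):
    return 2.0 - math.sqrt(1.0 - x * x)

def first_integral_b(x):
    """dL/dy' = y'/(x*sqrt(1+y'^2)); constant since L has no explicit y."""
    yp = dy(y_b, x)
    return yp / (x * math.sqrt(1.0 + yp * yp))

# (c) Semicircle centred on the x-axis: (x - 1)^2 + y^2 = 1 (taking a = 1),
# which satisfies y(a) = a and y(b) = sqrt(2ab - b^2).
def y_c(x):
    return math.sqrt(1.0 - (x - 1.0) ** 2)

def first_integral_c(x):
    """Beltrami identity: L - y'*dL/dy' = 1/(y*sqrt(1+y'^2)); constant
    since L has no explicit x."""
    yp = dy(y_c, x)
    return 1.0 / (y_c(x) * math.sqrt(1.0 + yp * yp))

vals_b = [first_integral_b(x) for x in (0.2, 0.5, 0.8)]
vals_c = [first_integral_c(x) for x in (0.5, 1.0, 1.5)]
```

For both circles the conserved quantity equals the reciprocal of the radius, so each list should be constant up to finite-difference error.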
