• # Paper 1, Section II, E

State what it means for a function $f: \mathbb{R}^{m} \rightarrow \mathbb{R}^{r}$ to be differentiable at a point $x \in \mathbb{R}^{m}$, and define its derivative $f^{\prime}(x)$.

Let $\mathcal{M}_{n}$ be the vector space of $n \times n$ real-valued matrices, and let $p: \mathcal{M}_{n} \rightarrow \mathcal{M}_{n}$ be given by $p(A)=A^{3}-3 A-I$. Show that $p$ is differentiable at any $A \in \mathcal{M}_{n}$, and calculate its derivative.

State the inverse function theorem for a function $f$. In the case when $f(0)=0$ and $f^{\prime}(0)=I$, prove the existence of a continuous local inverse function in a neighbourhood of 0. [The rest of the proof of the inverse function theorem is not expected.]

Show that there exists a positive $\epsilon$ such that there is a continuously differentiable function $q: D_{\epsilon}(I) \rightarrow \mathcal{M}_{n}$ such that $p \circ q=\left.\mathrm{id}\right|_{D_{\epsilon}(I)}$. Is it possible to find a continuously differentiable inverse to $p$ on the whole of $\mathcal{M}_{n}$ ? Justify your answer.
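A quick numerical cross-check (in Python, not part of the expected written proof): expanding $p(A+H)$ and collecting first-order terms suggests the derivative $Dp_A(H)=A^2H+AHA+HA^2-3H$, which can be compared against a finite difference.

```python
import numpy as np

# Sanity check (not a proof): for p(A) = A^3 - 3A - I the derivative at A
# should be the linear map  Dp_A(H) = A^2 H + A H A + H A^2 - 3H.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
H = rng.standard_normal((n, n))
I = np.eye(n)

def p(M):
    return M @ M @ M - 3 * M - I

def Dp(A, H):
    return A @ A @ H + A @ H @ A + H @ A @ A - 3 * H

# Central difference: for a cubic map this equals Dp_A(H) + t^2 H^3 exactly.
t = 1e-5
fd = (p(A + t * H) - p(A - t * H)) / (2 * t)
err = np.linalg.norm(fd - Dp(A, H))
assert err < 1e-6
```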


• # Paper 1, Section I, G

Let $D$ be the open disc with centre $e^{2 \pi i / 6}$ and radius 1, and let $L$ be the open lower half plane. Starting with a suitable Möbius map, find a conformal equivalence (or conformal bijection) of $D \cap L$ onto the open unit disc.

• # Paper 1, Section II, G

Let $\ell(z)$ be an analytic branch of $\log z$ on a domain $D \subset \mathbb{C} \backslash\{0\}$. Write down an analytic branch of $z^{1 / 2}$ on $D$. Show that if $\psi_{1}(z)$ and $\psi_{2}(z)$ are two analytic branches of $z^{1 / 2}$ on $D$, then either $\psi_{1}(z)=\psi_{2}(z)$ for all $z \in D$ or $\psi_{1}(z)=-\psi_{2}(z)$ for all $z \in D$.

Describe the principal value or branch $\sigma_{1}(z)$ of $z^{1 / 2}$ on $D_{1}=\mathbb{C} \backslash\{x \in \mathbb{R}: x \leqslant 0\}$. Describe a branch $\sigma_{2}(z)$ of $z^{1 / 2}$ on $D_{2}=\mathbb{C} \backslash\{x \in \mathbb{R}: x \geqslant 0\}$.

Construct an analytic branch $\varphi(z)$ of $\sqrt{1-z^{2}}$ on $\mathbb{C} \backslash\{x \in \mathbb{R}:-1 \leqslant x \leqslant 1\}$ with $\varphi(2 i)=\sqrt{5}$. [If you choose to use $\sigma_{1}$ and $\sigma_{2}$ in your construction, then you may assume without proof that they are analytic.]

Show that for $0<|z|<1$ we have $\varphi(1 / z)=-i \sigma_{1}\left(1-z^{2}\right) / z$. Hence find the first three terms of the Laurent series of $\varphi(1 / z)$ about 0 .

Set $f(z)=\varphi(z) /\left(1+z^{2}\right)$ for $|z|>1$ and $g(z)=f(1 / z) / z^{2}$ for $0<|z|<1$. Compute the residue of $g$ at 0 and use it to compute the integral

$\int_{|z|=2} f(z) d z$
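A symbolic spot-check (assuming the principal branch $\sigma_1$, which is what sympy's `sqrt` gives near $z=0$): with $\varphi(1/z)=-i\,\sigma_1(1-z^2)/z$, the function $g(z)=f(1/z)/z^2=\varphi(1/z)/(1+z^2)$ has a simple pole at $0$ whose residue sympy can compute.

```python
from sympy import I, symbols, sqrt, residue, simplify

# g(z) = f(1/z)/z^2 = phi(1/z)/(1 + z^2), using the principal square root
# for phi(1/z) = -i * sqrt(1 - z^2) / z near z = 0.
z = symbols('z')
g = -I * sqrt(1 - z**2) / (z * (1 + z**2))
res0 = residue(g, z, 0)
assert simplify(res0 + I) == 0   # the residue at 0 is -i
```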


• # Paper 1, Section II, D

Write down the electric potential due to a point charge $Q$ at the origin.

A dipole consists of a charge $Q$ at the origin, and a charge $-Q$ at position $-\mathbf{d}$. Show that, at large distances, the electric potential due to such a dipole is given by

$\Phi(\mathbf{x})=\frac{1}{4 \pi \epsilon_{0}} \frac{\mathbf{p} \cdot \mathbf{x}}{|\mathbf{x}|^{3}}$

where $\mathbf{p}=Q \mathbf{d}$ is the dipole moment. Hence show that the potential energy between two dipoles $\mathbf{p}_{1}$ and $\mathbf{p}_{2}$, with separation $\mathbf{r}$, where $|\mathbf{r}| \gg|\mathbf{d}|$, is

$U=\frac{1}{4 \pi \epsilon_{0}}\left(\frac{\mathbf{p}_{1} \cdot \mathbf{p}_{2}}{r^{3}}-\frac{3\left(\mathbf{p}_{1} \cdot \mathbf{r}\right)\left(\mathbf{p}_{2} \cdot \mathbf{r}\right)}{r^{5}}\right)$

Dipoles are arranged on an infinite chessboard so that they make an angle $\theta$ with the horizontal in an alternating pattern as shown in the figure. Compute the energy between a given dipole and its four nearest neighbours, and show that this is independent of $\theta$.
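A numerical sketch of the dipole-dipole energy, using the standard prefactor $1/(4\pi\epsilon_0)$ (here set to 1): each dipole is modelled as point charges $+Q$ at its centre and $-Q$ displaced by $-\mathbf{d}=-\mathbf{p}/Q$, as in the question, and the Coulomb interaction energy of the four charge pairs is compared with the formula as $|\mathbf{d}|\to 0$.

```python
import numpy as np

# Check U = p1.p2 / r^3 - 3 (p1.r)(p2.r) / r^5  (units with 1/(4 pi eps0) = 1)
# against the exact Coulomb energy of two small charge pairs.
Q = 1.0
p1 = np.array([1.0, 2.0, 0.5]) * 1e-4
p2 = np.array([0.3, -1.0, 2.0]) * 1e-4
r = np.array([2.0, 1.0, -0.5])            # separation vector

charges1 = [(Q, np.zeros(3)), (-Q, -p1 / Q)]
charges2 = [(Q, r), (-Q, r - p2 / Q)]
U_num = sum(qa * qb / np.linalg.norm(xa - xb)
            for qa, xa in charges1 for qb, xb in charges2)

R = np.linalg.norm(r)
U_formula = p1 @ p2 / R**3 - 3 * (p1 @ r) * (p2 @ r) / R**5
assert abs(U_num - U_formula) < 1e-3 * abs(U_formula)
```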


• # Paper 1, Section II, C

Steady two-dimensional potential flow of an incompressible fluid is confined to the wedge $0<\theta<\alpha$, where $(r, \theta)$ are polar coordinates centred on the vertex of the wedge and $0<\alpha<\pi$.

(a) Show that a velocity potential $\phi$ of the form

$\phi(r, \theta)=A r^{\gamma} \cos (\lambda \theta),$

where $A, \gamma$ and $\lambda$ are positive constants, satisfies the condition of incompressible flow, provided that $\gamma$ and $\lambda$ satisfy a certain relation to be determined.

Assuming that $u_{\theta}$, the $\theta$-component of velocity, does not change sign within the wedge, determine the values of $\gamma$ and $\lambda$ by using the boundary conditions.

(b) Calculate the shape of the streamlines of this flow, labelling them by the distance $r_{\min }$ of closest approach to the vertex. Sketch the streamlines.

(c) Show that the speed $|\mathbf{u}|$ and pressure $p$ are independent of $\theta$. Assuming that at some radius $r=r_{0}$ the speed and pressure are $u_{0}$ and $p_{0}$, respectively, find the pressure difference in the flow between the vertex of the wedge and $r_{0}$.

[Hint: In polar coordinates $(r, \theta)$,

$\nabla f=\left(\frac{\partial f}{\partial r}, \frac{1}{r} \frac{\partial f}{\partial \theta}\right) \quad \text { and } \quad \nabla \cdot \mathbf{F}=\frac{1}{r} \frac{\partial}{\partial r}\left(r F_{r}\right)+\frac{1}{r} \frac{\partial F_{\theta}}{\partial \theta}$

for a scalar $f$ and a vector $\mathbf{F}=\left(F_{r}, F_{\theta}\right)$.]
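A symbolic check of part (a) (not the requested derivation): applying the polar-coordinate Laplacian from the hint to $\phi=Ar^\gamma\cos(\lambda\theta)$ gives $A(\gamma^2-\lambda^2)r^{\gamma-2}\cos(\lambda\theta)$, so incompressibility (Laplace's equation) forces $\gamma=\lambda$.

```python
from sympy import symbols, cos, diff, simplify

# Laplacian in polar coordinates applied to phi = A r^gamma cos(lambda theta):
#   (1/r) d/dr (r dphi/dr) + (1/r^2) d^2 phi / dtheta^2
r, th, A, g, l = symbols('r theta A gamma lamda', positive=True)
phi = A * r**g * cos(l * th)
lap = diff(r * diff(phi, r), r) / r + diff(phi, th, 2) / r**2
assert simplify(lap - A * (g**2 - l**2) * r**(g - 2) * cos(l * th)) == 0
assert simplify(lap.subs(g, l)) == 0     # harmonic exactly when gamma = lambda
```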


• # Paper 1, Section I, E

Define the Gauss map of a smooth embedded surface. Consider the surface of revolution $S$ with points

$\left(\begin{array}{c} (2+\cos v) \cos u \\ (2+\cos v) \sin u \\ \sin v \end{array}\right) \in \mathbb{R}^{3}$

for $u, v \in[0,2 \pi]$. Let $f$ be the Gauss map of $S$. Describe $f$ on the $\{y=0\}$ cross-section of $S$, and use this to write down an explicit formula for $f$.

Let $U$ be the upper hemisphere of the 2-sphere $S^{2}$, and $K$ the Gauss curvature of $S$. Calculate $\int_{f^{-1}(U)} K \, dA$.
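A coordinate cross-check of the integral (not the expected geometric argument): for this torus the outward normal is $n(u,v)=(\cos v\cos u,\cos v\sin u,\sin v)$, a direct computation gives $K\,dA=\cos v\,du\,dv$, and $n$ lies in the upper hemisphere exactly when $v\in(0,\pi)$.

```python
from sympy import symbols, cos, integrate, pi

# For the torus with R = 2 and tube radius 1:
#   K = cos v / (2 + cos v),  dA = (2 + cos v) du dv,  so  K dA = cos v du dv,
# and f^{-1}(U) is the region 0 < v < pi (where the normal points upward).
u, v = symbols('u v')
K_dA = cos(v)
total = integrate(integrate(K_dA, (v, 0, pi)), (u, 0, 2 * pi))
assert total == 0
```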

• # Paper 1, Section II, E

Let $\mathcal{C}$ be the curve in the $(x, z)$-plane defined by the equation

$\left(x^{2}-1\right)^{2}+\left(z^{2}-1\right)^{2}=5 .$

Sketch $\mathcal{C}$, taking care with inflection points.

Let $S$ be the surface of revolution in $\mathbb{R}^{3}$ given by spinning $\mathcal{C}$ about the $z$-axis. Write down an equation defining $S$. Stating any result you use, show that $S$ is a smooth embedded surface.

Let $r$ be the radial coordinate on the $(x, y)$-plane. Show that the Gauss curvature of $S$ vanishes when $r=1$. Are these the only points at which the Gauss curvature of $S$ vanishes? Briefly justify your answer.


• # Paper 1, Section II, G

State the structure theorem for a finitely generated module $M$ over a Euclidean domain $R$ in terms of invariant factors.

Let $V$ be a finite-dimensional vector space over a field $F$ and let $\alpha: V \rightarrow V$ be a linear map. Let $V_{\alpha}$ denote the $F[X]$-module $V$ with $X$ acting as $\alpha$. Apply the structure theorem to $V_{\alpha}$ to show the existence of a basis of $V$ with respect to which $\alpha$ has the rational canonical form. Prove that the minimal polynomial and the characteristic polynomial of $\alpha$ can be expressed in terms of the invariant factors. [Hint: For the characteristic polynomial apply suitable row operations.] Deduce the Cayley-Hamilton theorem for $\alpha$.

Now assume that $\alpha$ has matrix $\left(a_{i j}\right)$ with respect to the basis $v_{1}, \ldots, v_{n}$ of $V$. Let $M$ be the free $F[X]$-module of rank $n$ with free basis $m_{1}, \ldots, m_{n}$ and let $\theta: M \rightarrow V_{\alpha}$ be the unique homomorphism with $\theta\left(m_{i}\right)=v_{i}$ for $1 \leqslant i \leqslant n$. Using the fact, which you need not prove, that ker $\theta$ is generated by the elements $X m_{i}-\sum_{j=1}^{n} a_{j i} m_{j}, 1 \leqslant i \leqslant n$, find the invariant factors of $V_{\alpha}$ in the case that $V=\mathbb{R}^{3}$ and $\alpha$ is represented by the real matrix

$\left(\begin{array}{ccc} 0 & 1 & 0 \\ -4 & 4 & 0 \\ -2 & 1 & 2 \end{array}\right)$

with respect to the standard basis.
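A cross-check of the final answer (not the kernel computation the question asks for): the invariant factors multiply to the characteristic polynomial and the last one is the minimal polynomial. Here $\chi(X)=(X-2)^3$ while $(A-2I)^2=0$ but $A-2I\neq 0$, so the invariant factors are $(X-2)$ and $(X-2)^2$.

```python
from sympy import Matrix, symbols, eye, zeros, expand

x = symbols('x')
A = Matrix([[0, 1, 0], [-4, 4, 0], [-2, 1, 2]])

# Characteristic polynomial is (x - 2)^3 ...
assert expand(A.charpoly(x).as_expr()) == expand((x - 2)**3)
# ... and the minimal polynomial is (x - 2)^2:
assert (A - 2 * eye(3))**2 == zeros(3, 3)
assert (A - 2 * eye(3)) != zeros(3, 3)
```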


• # Paper 1, Section I, F

Define what it means for two $n \times n$ matrices $A$ and $B$ to be similar. Define the Jordan normal form of a matrix.

Determine whether the matrices

$A=\left(\begin{array}{ccc} 4 & 6 & -15 \\ 1 & 3 & -5 \\ 1 & 2 & -4 \end{array}\right), \quad B=\left(\begin{array}{ccc} 1 & -3 & 3 \\ -2 & -6 & 13 \\ -1 & -4 & 8 \end{array}\right)$

are similar, carefully stating any theorem you use.
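A numerical check of the conclusion: both matrices turn out to have characteristic polynomial $(t-1)^3$, so similarity is decided by the Jordan structure at the eigenvalue 1, i.e. by $\operatorname{rank}(A-I)$ versus $\operatorname{rank}(B-I)$.

```python
import numpy as np

A = np.array([[4, 6, -15], [1, 3, -5], [1, 2, -4]], dtype=float)
B = np.array([[1, -3, 3], [-2, -6, 13], [-1, -4, 8]], dtype=float)
I = np.eye(3)

rA = np.linalg.matrix_rank(A - I)
rB = np.linalg.matrix_rank(B - I)
# Jordan block sizes (2,1) for A versus a single block of size 3 for B,
# so the matrices are not similar.
assert (rA, rB) == (1, 2)
```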

• # Paper 1, Section II, F

Let $\mathcal{M}_{n}$ denote the vector space of $n \times n$ matrices over a field $\mathbb{F}=\mathbb{R}$ or $\mathbb{C}$. What is the rank $r(A)$ of a matrix $A \in \mathcal{M}_{n}$?

Show, stating accurately any preliminary results that you require, that $r(A)=n$ if and only if $A$ is non-singular, i.e. $\operatorname{det} A \neq 0$.

Does $\mathcal{M}_{n}$ have a basis consisting of non-singular matrices? Justify your answer.

Suppose that an $n \times n$ matrix $A$ is non-singular and every entry of $A$ is either 0 or 1. Let $c_{n}$ be the largest possible number of 1's in such an $A$. Show that $c_{n} \leqslant n^{2}-n+1$. Is this bound attained? Justify your answer.

[Standard properties of the adjugate matrix can be assumed, if accurately stated.]
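A brute-force check of the bound for $n=3$: among all $2^9=512$ zero-one $3\times 3$ matrices, the non-singular ones have at most $7 = 3^2-3+1$ ones, and 7 is attained.

```python
import itertools
import numpy as np

best = 0
for bits in itertools.product([0, 1], repeat=9):
    M = np.array(bits, dtype=float).reshape(3, 3)
    if abs(np.linalg.det(M)) > 0.5:     # 0/1 matrices have integer determinant
        best = max(best, int(M.sum()))
assert best == 7 == 3**2 - 3 + 1
```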


• # Paper 1, Section II, H

Let $\left(X_{n}\right)_{n \geqslant 0}$ be a Markov chain with transition matrix $P$. What is a stopping time of $\left(X_{n}\right)_{n \geqslant 0}$ ? What is the strong Markov property?

A porter is trying to apprehend a student who is walking along a long narrow path at night. Being unaware of the porter, the student's location $Y_{n}$ at time $n \geqslant 0$ evolves as a simple symmetric random walk on $\mathbb{Z}$. The porter's initial location $Z_{0}$ is $2 m$ units to the right of the student, so $Z_{0}-Y_{0}=2 m$ where $m \geqslant 1$. The future locations $Z_{n+1}$ of the porter evolve as follows: The porter moves to the left (so $Z_{n+1}=Z_{n}-1$ ) with probability $q \in\left(\frac{1}{2}, 1\right)$, and to the right with probability $1-q$ whenever $Z_{n}-Y_{n}>2$. When $Z_{n}-Y_{n}=2$, the porter's probability of moving left changes to $r \in(0,1)$, and the probability of moving right is $1-r$.

(a) By setting up an appropriate Markov chain, show that for $m \geqslant 2$, the expected time for the porter to be a distance $2(m-1)$ away from the student is $2 /(2 q-1)$.

(b) Show that the expected time for the porter to catch the student, i.e. for their locations to coincide, is

$\frac{2}{r}+\left(m+\frac{1}{r}-2\right) \frac{2}{2 q-1} .$

[You may use without proof the fact that the time for the porter to catch the student is finite with probability 1 for any $m \geqslant 1$.]
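A numerical cross-check of (a) and (b) via first-step analysis: tracking only the even distance $D_n=Z_n-Y_n=2k$, one time step (porter and student both move) changes $D$ by $-2$, $0$ or $+2$, with down/up probabilities $q/2$, $(1-q)/2$ for $k\geqslant 2$ and $r/2$, $(1-r)/2$ for $k=1$; the stay probability is always $1/2$. Truncating at a large $k=K$ (reflecting) is a harmless approximation since the walk drifts towards 0.

```python
import numpy as np

q, r, m, K = 0.8, 0.5, 3, 300
M = np.zeros((K, K))          # unknowns t_1, ..., t_K (expected catch times; t_0 = 0)
b = np.ones(K)
for i in range(K):
    k = i + 1
    down, up = (q / 2, (1 - q) / 2) if k >= 2 else (r / 2, (1 - r) / 2)
    M[i, i] = 0.5                     # 1 - (stay probability 1/2)
    if k >= 2:
        M[i, i - 1] = -down           # t_{k-1} term (for k = 1 it hits t_0 = 0)
    if k < K:
        M[i, i + 1] = -up
    else:
        M[i, i] -= up                 # reflect at the truncation boundary
t = np.linalg.solve(M, b)

step = 2 / (2 * q - 1)
assert abs((t[m - 1] - t[m - 2]) - step) < 1e-8          # part (a)
expected = 2 / r + (m + 1 / r - 2) * step                # part (b)
assert abs(t[m - 1] - expected) < 1e-8
```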


• # Paper 1, Section II, B

Consider the equation

$\nabla^{2} \phi=\delta(x) g(y) \quad(*)$

on the two-dimensional strip $-\infty<x<\infty$, $0<y<a$, where $\delta(x)$ is the delta function and $g(y)$ is a smooth function satisfying $g(0)=g(a)=0$. The function $\phi(x, y)$ satisfies the boundary conditions $\phi(x, 0)=\phi(x, a)=0$ and $\lim _{x \rightarrow \pm \infty} \phi(x, y)=0$. By using solutions of Laplace's equation for $x<0$ and $x>0$, matched suitably at $x=0$, find the solution of $(*)$ in terms of Fourier coefficients of $g(y)$.

Find the solution of $(*)$ in the limiting case $g(y)=\delta(y-c)$, where $0<c<a$, and hence determine the Green's function $\phi(x, y)$ in the strip, satisfying

$\nabla^{2} \phi=\delta(x-b) \delta(y-c)$

and the same boundary conditions as before.


• # Paper 1, Section I, C

(a) Find an $LU$ factorisation of the matrix

$A=\left[\begin{array}{cccc} 1 & 1 & 0 & 3 \\ 0 & 2 & 2 & 12 \\ 0 & 5 & 7 & 32 \\ 3 & -1 & -1 & -10 \end{array}\right]$

where the diagonal elements of $L$ are $L_{11}=L_{44}=1, L_{22}=L_{33}=2$.

(b) Use this factorisation to solve the linear system $A \mathbf{x}=\mathbf{b}$, where

$\mathbf{b}=\left[\begin{array}{c} -3 \\ -12 \\ -30 \\ 13 \end{array}\right]$
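A hedged computational sketch of both parts: Doolittle elimination gives $A=L_0U_0$ with $L_0$ unit lower triangular, and rescaling by the prescribed diagonal $d=(1,2,2,1)$ via $L=L_0\,\mathrm{diag}(d)$, $U=\mathrm{diag}(d)^{-1}U_0$ keeps $A=LU$ while giving $L$ the required diagonal.

```python
import numpy as np

A = np.array([[1, 1, 0, 3],
              [0, 2, 2, 12],
              [0, 5, 7, 32],
              [3, -1, -1, -10]], dtype=float)
b = np.array([-3, -12, -30, 13], dtype=float)
n = len(A)

L0, U0 = np.eye(n), A.copy()
for k in range(n - 1):                 # Doolittle elimination, no pivoting
    for i in range(k + 1, n):
        L0[i, k] = U0[i, k] / U0[k, k]
        U0[i] -= L0[i, k] * U0[k]

d = np.array([1.0, 2.0, 2.0, 1.0])     # prescribed diagonal of L
L, U = L0 * d, (U0.T / d).T            # L0 diag(d) and diag(d)^{-1} U0
assert np.allclose(L @ U, A) and np.allclose(np.diag(L), d)

y = np.linalg.solve(L, b)              # forward substitution (L triangular)
x = np.linalg.solve(U, y)              # back substitution
assert np.allclose(x, [1, -1, 1, -1])
```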

• # Paper 1, Section II, C

(a) Given a set of $n+1$ distinct real points $x_{0}, x_{1}, \ldots, x_{n}$ and real numbers $f_{0}, f_{1}, \ldots, f_{n}$, show that the interpolating polynomial $p_{n} \in \mathbb{P}_{n}[x], p_{n}\left(x_{i}\right)=f_{i}$, can be written in the form

$p_{n}(x)=\sum_{k=0}^{n} a_{k} \prod_{j=0, j \neq k}^{n} \frac{x-x_{j}}{x_{k}-x_{j}}, \quad x \in \mathbb{R}$

where the coefficients $a_{k}$ are to be determined.
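A quick check of part (a): since the $k$-th product equals 1 at $x_k$ and 0 at every other node, the coefficients are $a_k=f_k$, and the resulting formula reproduces any polynomial of degree $n$ exactly.

```python
import numpy as np

def lagrange(xs, fs, x):
    """Evaluate the Lagrange form with coefficients a_k = f_k."""
    total = 0.0
    for k, (xk, fk) in enumerate(zip(xs, fs)):
        prod = 1.0
        for j, xj in enumerate(xs):
            if j != k:
                prod *= (x - xj) / (xk - xj)
        total += fk * prod
    return total

f = lambda x: 2 * x**3 - x + 1          # degree 3, so n = 3 nodes suffice... n+1 = 4 nodes
xs = [-1.0, 0.0, 0.5, 2.0]
fs = [f(x) for x in xs]
ok = all(abs(lagrange(xs, fs, x) - f(x)) < 1e-12 for x in [-0.7, 0.25, 1.3])
assert ok
```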

(b) Consider the approximation of the integral of a function $f \in C[a, b]$ by a finite sum,

$\int_{a}^{b} f(x) d x \approx \sum_{k=0}^{s-1} w_{k} f\left(c_{k}\right) \quad(1)$

where the weights $w_{0}, \ldots, w_{s-1}$ and nodes $c_{0}, \ldots, c_{s-1} \in[a, b]$ are independent of $f$. Derive the expressions for the weights $w_{k}$ that make the approximation (1) exact for $f$ being any polynomial of degree $s-1$, i.e. $f \in \mathbb{P}_{s-1}[x]$.

Show that by choosing $c_{0}, \ldots, c_{s-1}$ to be zeros of the polynomial $q_{s}(x)$ of degree $s$, one of a sequence of orthogonal polynomials defined with respect to the scalar product

$\langle u, v\rangle=\int_{a}^{b} u(x) v(x) d x \quad(2)$

the approximation (1) becomes exact for $f \in \mathbb{P}_{2 s-1}[x]$ (i.e. for all polynomials of degree $2s-1$).

(c) On the interval $[a, b]=[-1,1]$ the scalar product (2) generates orthogonal polynomials given by

$q_{n}(x)=\frac{1}{2^{n} n !} \frac{d^{n}}{d x^{n}}\left(x^{2}-1\right)^{n}, \quad n=0,1,2, \ldots$

Find the values of the nodes $c_{k}$ for which the approximation (1) is exact for all polynomials of degree 7 (i.e. $f \in \mathbb{P}_{7}[x]$ ) but no higher.
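A numerical check of part (c) using numpy's built-in Gauss-Legendre rule: the $s=4$ nodes (the zeros of $q_4$) integrate every monomial of degree $\leqslant 7$ exactly on $[-1,1]$, but not $x^8$; the largest node should equal $\sqrt{(3+2\sqrt{6/5})/7}$.

```python
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(4)

def quad(k):                     # quadrature applied to x^k
    return np.sum(weights * nodes**k)

def exact(k):                    # exact integral of x^k over [-1, 1]
    return 2.0 / (k + 1) if k % 2 == 0 else 0.0

assert all(abs(quad(k) - exact(k)) < 1e-12 for k in range(8))
assert abs(quad(8) - exact(8)) > 1e-3          # degree 8 is no longer exact
assert np.isclose(np.max(nodes), np.sqrt((3 + 2 * np.sqrt(6 / 5)) / 7))
```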


• # Paper 1, Section I, H

Solve the following optimization problem using the simplex algorithm:

$\begin{array}{rr} \operatorname{maximise} & x_{1}+x_{2} \\ \text { subject to } & \left|x_{1}-2 x_{2}\right| \leqslant 2 \\ & 4 x_{1}+x_{2} \leqslant 4, \quad x_{1}, x_{2} \geqslant 0 \end{array}$

Suppose the constraints above are now replaced by $\left|x_{1}-2 x_{2}\right| \leqslant 2+\epsilon_{1}$ and $4 x_{1}+x_{2} \leqslant 4+\epsilon_{2}$. Give an expression for the maximum objective value that is valid for all sufficiently small non-zero $\epsilon_{1}$ and $\epsilon_{2}$.
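Not the simplex algorithm itself, but a brute-force vertex check of the same LP (the modulus splits into $x_1-2x_2\leqslant 2$ and $-x_1+2x_2\leqslant 2$): since an LP optimum is attained at a vertex, it suffices to enumerate the intersections of pairs of constraint lines.

```python
import itertools
import numpy as np

# Constraints a.x <= b, including x1 >= 0 and x2 >= 0.
A = np.array([[1.0, -2.0], [-1.0, 2.0], [4.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([2.0, 2.0, 4.0, 0.0, 0.0])

best, best_x = -np.inf, None
for i, j in itertools.combinations(range(len(A)), 2):
    try:
        x = np.linalg.solve(A[[i, j]], b[[i, j]])
    except np.linalg.LinAlgError:
        continue                          # parallel constraint lines
    if np.all(A @ x <= b + 1e-9) and x.sum() > best:
        best, best_x = x.sum(), x

assert np.isclose(best, 2.0)              # optimum value 2
assert np.allclose(best_x, [2 / 3, 4 / 3])
```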


• # Paper 1, Section I,

Define what it means for an operator $Q$ to be hermitian and briefly explain the significance of this definition in quantum mechanics.

Define the uncertainty $(\Delta Q)_{\psi}$ of $Q$ in a state $\psi$. If $P$ is also a hermitian operator, show by considering the state $(Q+i \lambda P) \psi$, where $\lambda$ is a real number, that

$\left\langle Q^{2}\right\rangle_{\psi}\left\langle P^{2}\right\rangle_{\psi} \geqslant \frac{1}{4}\left|\langle i[Q, P]\rangle_{\psi}\right|^{2}$

Hence deduce that

$(\Delta Q)_{\psi}(\Delta P)_{\psi} \geqslant \frac{1}{2}\left|\langle i[Q, P]\rangle_{\psi}\right|$

Give a physical interpretation of this result.
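A finite-dimensional sanity check of the first inequality, with $Q=\sigma_x$ and $P=\sigma_y$ (both hermitian, $i[Q,P]=-2\sigma_z$), for a randomly chosen normalised state:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

rng = np.random.default_rng(1)
psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psi /= np.linalg.norm(psi)                # normalised state

def ev(M):                                # expectation value <psi|M|psi>
    return (psi.conj() @ M @ psi).real

lhs = ev(sx @ sx) * ev(sy @ sy)
comm = 1j * (sx @ sy - sy @ sx)           # i[Q, P] = -2 sigma_z, hermitian
rhs = 0.25 * abs(ev(comm))**2
assert lhs >= rhs - 1e-12                 # <Q^2><P^2> >= (1/4)|<i[Q,P]>|^2
```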

• # Paper 1, Section II, A

Consider a quantum system with Hamiltonian $H$ and wavefunction $\Psi$ obeying the time-dependent Schrödinger equation. Show that if $\Psi$ is a stationary state and the observable $Q$ does not depend explicitly on time, then $\langle Q\rangle_{\Psi}$ is independent of time.

A particle of mass $m$ is confined to the interval $0 \leqslant x \leqslant a$ by infinite potential barriers, but moves freely otherwise. Let $\Psi(x, t)$ be the normalised wavefunction for the particle at time $t$, with

$\Psi(x, 0)=c_{1} \psi_{1}(x)+c_{2} \psi_{2}(x)$

where

$\psi_{1}(x)=\left(\frac{2}{a}\right)^{1 / 2} \sin \frac{\pi x}{a}, \quad \psi_{2}(x)=\left(\frac{2}{a}\right)^{1 / 2} \sin \frac{2 \pi x}{a}$

and $c_{1}, c_{2}$ are complex constants. If the energy of the particle is measured at time $t$, what are the possible results, and what is the probability for each result to be obtained? Give brief justifications of your answers.

Calculate $\langle\hat{x}\rangle_{\Psi}$ at time $t$ and show that the result oscillates with a frequency $\omega$, to be determined. Show in addition that

$\left|\langle\hat{x}\rangle_{\Psi}-\frac{a}{2}\right| \leqslant \frac{16 a}{9 \pi^{2}} .$
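A numerical check of the matrix element behind this bound: with $a=1$, $\langle\psi_1|\hat{x}|\psi_2\rangle=\int_0^1 2x\sin(\pi x)\sin(2\pi x)\,dx$ should equal $-16a/(9\pi^2)$; together with $\langle n|\hat{x}|n\rangle=a/2$ and $2|c_1||c_2|\leqslant|c_1|^2+|c_2|^2=1$, this gives the stated bound.

```python
import numpy as np

a = 1.0
x = np.linspace(0.0, a, 200001)
integrand = x * (2 / a) * np.sin(np.pi * x / a) * np.sin(2 * np.pi * x / a)

# Trapezoidal rule on a fine uniform grid.
dx = x[1] - x[0]
x12 = dx * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
assert abs(x12 - (-16 * a / (9 * np.pi**2))) < 1e-8
```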


• # Paper 1, Section I, H

Suppose $X_{1}, \ldots, X_{n}$ are independent with distribution $N(\mu, 1)$. Suppose a prior $\mu \sim N\left(\theta, \tau^{-2}\right)$ is placed on the unknown parameter $\mu$ for some given deterministic $\theta \in \mathbb{R}$ and $\tau>0$. Derive the posterior mean.

Find an expression for the mean squared error of this posterior mean when $\theta=0$.
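A numerical cross-check of the conjugate-normal answer: with prior $N(\theta,\tau^{-2})$ (precision $\tau^2$) and $n$ observations of precision 1, the posterior mean should be $(\tau^2\theta+n\bar{x})/(\tau^2+n)$, which can be compared against a grid evaluation of the posterior for fixed data.

```python
import numpy as np

theta, tau = 1.0, 2.0
x = np.array([0.5, 1.2, -0.3, 2.0])       # fixed illustrative data
n, xbar = len(x), x.mean()

mu = np.linspace(-10, 10, 400001)         # grid over the parameter
log_post = (-0.5 * ((x[:, None] - mu)**2).sum(0)
            - 0.5 * tau**2 * (mu - theta)**2)
w = np.exp(log_post - log_post.max())     # unnormalised posterior weights
numeric_mean = (mu * w).sum() / w.sum()

formula = (tau**2 * theta + n * xbar) / (tau**2 + n)
assert abs(numeric_mean - formula) < 1e-6
```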

• # Paper 1, Section II, H

Let $X_{1}, \ldots, X_{n}$ be i.i.d. $U[0,2 \theta]$ random variables, where $\theta>0$ is unknown.

(a) Derive the maximum likelihood estimator $\hat{\theta}$ of $\theta$.

(b) What is a sufficient statistic? What is a minimal sufficient statistic? Is $\hat{\theta}$ sufficient for $\theta$ ? Is it minimal sufficient? Answer the same questions for the sample mean $\tilde{\theta}:=\sum_{i=1}^{n} X_{i} / n$. Briefly justify your answers.

[You may use any result from the course provided it is stated clearly.]

(c) Show that the mean squared errors of $\hat{\theta}$ and $\tilde{\theta}$ are respectively

$\frac{2 \theta^{2}}{(n+1)(n+2)} \quad \text { and } \quad \frac{\theta^{2}}{3 n} \text {. }$

(d) Show that for each $t \in \mathbb{R}, \lim _{n \rightarrow \infty} \mathbb{P}(n(1-\hat{\theta} / \theta) \geqslant t)=h(t)$ for a function $h$ you should specify. Give, with justification, an approximate $1-\alpha$ confidence interval for $\theta$ whose expected length is

$\left(\frac{n \theta}{n+1}\right)\left(\frac{\log (1 / \alpha)}{n-\log (1 / \alpha)}\right)$

[Hint: $\lim _{n \rightarrow \infty}\left(1-\frac{t}{n}\right)^{n}=e^{-t}$ for all $t \in \mathbb{R}$.]
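An exact symbolic check of the first MSE in (c) for one choice of $n$: the maximum $M$ of $n$ i.i.d. $U[0,2\theta]$ variables has density $nm^{n-1}/(2\theta)^n$ on $[0,2\theta]$, and $\hat\theta=M/2$.

```python
from sympy import symbols, integrate, simplify

n_val = 5
m, theta = symbols('m theta', positive=True)
density = n_val * m**(n_val - 1) / (2 * theta)**n_val      # density of the maximum
mse = integrate((m / 2 - theta)**2 * density, (m, 0, 2 * theta))
# Should equal 2 theta^2 / ((n+1)(n+2)) = theta^2 / 21 for n = 5.
assert simplify(mse - 2 * theta**2 / ((n_val + 1) * (n_val + 2))) == 0
```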


• # Paper 1, Section II, D

A motion sensor sits at the origin, in the middle of a field. The probability that you are detected as you sneak from one point to another along a path $\mathbf{x}(t): 0 \leqslant t \leqslant T$ is

$P[\mathbf{x}(t)]=\lambda \int_{0}^{T} \frac{v(t)}{r(t)} d t$

where $\lambda$ is a positive constant, $r(t)$ is your distance to the sensor, and $v(t)$ is your speed. (If $P[\mathbf{x}(t)] \geqslant 1$ for some path then you are detected with certainty.)

You start at point $(x, y)=(A, 0)$, where $A>0$. Your mission is to reach the point $(x, y)=(B \cos \alpha, B \sin \alpha)$, where $B>0$. What path should you take to minimise the chance of detection? Should you tiptoe or should you run?

A new and improved sensor detects you with probability

$\tilde{P}[\mathbf{x}(t)]=\lambda \int_{0}^{T} \frac{v(t)^{2}}{r(t)} d t$

Show that the optimal path now satisfies the equation

$\left(\frac{d r}{d t}\right)^{2}=E r-h^{2}$

for some constants $E$ and $h$ that you should identify.
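A numerical sketch of the last part (one consistent identification, not necessarily the examiners' notation): the Lagrangian $L=v^2/r=(\dot r^2+r^2\dot\theta^2)/r$ has no explicit $t$ or $\theta$ dependence, so $E=(\dot r^2+r^2\dot\theta^2)/r$ and $h=r\dot\theta$ are conserved, and substituting $\dot\theta=h/r$ gives $\dot r^2=Er-h^2$. The Euler-Lagrange equations reduce to $\ddot r=\dot r^2/(2r)+r\dot\theta^2/2$ and $\ddot\theta=-\dot r\dot\theta/r$, which we integrate with RK4 to verify the conservation laws.

```python
import numpy as np

def deriv(s):
    r, th, rd, thd = s
    return np.array([rd, thd,
                     rd**2 / (2 * r) + r * thd**2 / 2,   # Euler-Lagrange for r
                     -rd * thd / r])                     # Euler-Lagrange for theta

s = np.array([1.0, 0.0, 0.3, 0.5])       # initial (r, theta, rdot, thetadot)
E0 = (s[2]**2 + s[0]**2 * s[3]**2) / s[0]
h0 = s[0] * s[3]

dt = 1e-3
for _ in range(1000):                    # RK4 over t in [0, 1]
    k1 = deriv(s)
    k2 = deriv(s + dt / 2 * k1)
    k3 = deriv(s + dt / 2 * k2)
    k4 = deriv(s + dt * k3)
    s += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

r, th, rd, thd = s
E1, h1 = (rd**2 + r**2 * thd**2) / r, r * thd
assert abs(E1 - E0) < 1e-9 and abs(h1 - h0) < 1e-9   # E and h conserved
assert abs(rd**2 - (E1 * r - h1**2)) < 1e-9          # rdot^2 = E r - h^2
```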
