# Part IB, 2013

Paper 1, Section II, F

Define what it means for a sequence of functions $k_{n}: A \rightarrow \mathbb{R}, n=1,2, \ldots$, to converge uniformly on an interval $A \subset \mathbb{R}$.

By considering the functions $k_{n}(x)=\frac{\sin (n x)}{\sqrt{n}}$, or otherwise, show that uniform convergence of a sequence of differentiable functions does not imply uniform convergence of their derivatives.

Now suppose $k_{n}(x)$ is continuously differentiable on $A$ for each $n$, that $k_{n}\left(x_{0}\right)$ converges as $n \rightarrow \infty$ for some $x_{0} \in A$, and moreover that the derivatives $k_{n}^{\prime}(x)$ converge uniformly on $A$. Prove that $k_{n}(x)$ converges to a continuously differentiable function $k(x)$ on $A$, and that

$k^{\prime}(x)=\lim _{n \rightarrow \infty} k_{n}^{\prime}(x)$

Hence, or otherwise, prove that the function

$\sum_{n=1}^{\infty} \frac{x^{n} \sin (n x)}{n^{3}+1}$

is continuously differentiable on $(-1,1)$.

Paper 2, Section I, F

Let $\mathcal{C}[a, b]$ denote the vector space of continuous real-valued functions on the interval $[a, b]$, and let $\mathcal{C}^{\prime}[a, b]$ denote the subspace of continuously differentiable functions.

Show that $\|f\|_{1}=\max |f|+\max \left|f^{\prime}\right|$ defines a norm on $\mathcal{C}^{\prime}[a, b]$. Show furthermore that the map $\Phi: f \mapsto f^{\prime}((a+b) / 2)$ takes the closed unit ball $\left\{\|f\|_{1} \leqslant 1\right\} \subset \mathcal{C}^{\prime}[a, b]$ to a bounded subset of $\mathbb{R}$.

If instead we had used the norm $\|f\|_{0}=\max |f|$ restricted from $\mathcal{C}[a, b]$ to $\mathcal{C}^{\prime}[a, b]$, would $\Phi$ take the closed unit ball $\left\{\|f\|_{0} \leqslant 1\right\} \subset \mathcal{C}^{\prime}[a, b]$ to a bounded subset of $\mathbb{R}$? Justify your answer.

Paper 2, Section II, F

Let $f: U \rightarrow \mathbb{R}$ be continuous on an open set $U \subset \mathbb{R}^{2}$. Suppose that on $U$ the partial derivatives $D_{1} f, D_{2} f, D_{1} D_{2} f$ and $D_{2} D_{1} f$ exist and are continuous. Prove that $D_{1} D_{2} f=D_{2} D_{1} f$ on $U$.

If $f$ is infinitely differentiable, and $m \in \mathbb{N}$, what is the maximum number of distinct $m$-th order partial derivatives that $f$ may have on $U$?

Let $f: \mathbb{R}^{2} \rightarrow \mathbb{R}$ be defined by

$f(x, y)= \begin{cases}\frac{x^{2} y^{2}}{x^{4}+y^{4}} & (x, y) \neq(0,0) \\ 0 & (x, y)=(0,0)\end{cases}$

Let $g: \mathbb{R}^{2} \rightarrow \mathbb{R}$ be defined by

$g(x, y)= \begin{cases}\frac{x y\left(x^{4}-y^{4}\right)}{x^{4}+y^{4}} & (x, y) \neq(0,0) \\ 0 & (x, y)=(0,0)\end{cases}$

For each of $f$ and $g$, determine whether they are (i) differentiable, (ii) infinitely differentiable at the origin. Briefly justify your answers.

Paper 3, Section I, 2F

For each of the following sequences of functions on $[0,1]$, indexed by $n=1,2, \ldots$, determine whether or not the sequence has a pointwise limit, and if so, determine whether or not the convergence to the pointwise limit is uniform.

$f_{n}(x)=1 /\left(1+n^{2} x^{2}\right)$

$g_{n}(x)=n x(1-x)^{n}$

$h_{n}(x)=\sqrt{n} x(1-x)^{n}$
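As a quick numerical sanity check (an illustration, not part of the question), one can estimate the sup-norm distance of $g_{n}$ and $h_{n}$ from their pointwise limit $0$ on $(0,1]$; the sup for $g_{n}$ stays near $1/e$, while the sup for $h_{n}$ tends to $0$:

```python
import numpy as np

# Grid estimate of sup |g_n| and sup |h_n| on (0, 1].
x = np.linspace(1e-6, 1.0, 200_001)

def sup_g(n):
    return np.max(n * x * (1 - x) ** n)

def sup_h(n):
    return np.max(np.sqrt(n) * x * (1 - x) ** n)

for n in (10, 100, 1000):
    print(n, sup_g(n), sup_h(n))
```

The maximiser of $x(1-x)^{n}$ is $x=1/(n+1)$, so $\sup g_{n} = (n/(n+1))^{n+1} \to 1/e$, consistent with the printed values.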

Paper 3, Section II, F

For each of the following statements, provide a proof or justify a counterexample.

The norms $\|x\|_{1}=\sum_{i=1}^{n}\left|x_{i}\right|$ and $\|x\|_{\infty}=\max _{1 \leqslant i \leqslant n}\left|x_{i}\right|$ on $\mathbb{R}^{n}$ are Lipschitz equivalent.

The norms $\|x\|_{1}=\sum_{i=1}^{\infty}\left|x_{i}\right|$ and $\|x\|_{\infty}=\max _{i}\left|x_{i}\right|$ on the vector space of sequences $\left(x_{i}\right)_{i \geqslant 1}$ with $\sum\left|x_{i}\right|<\infty$ are Lipschitz equivalent.

Given a linear function $\phi: V \rightarrow W$ between normed real vector spaces, there is some $N$ for which $\|\phi(x)\| \leqslant N$ for every $x \in V$ with $\|x\| \leqslant 1$.

Given a linear function $\phi: V \rightarrow W$ between normed real vector spaces for which there is some $N$ for which $\|\phi(x)\| \leqslant N$ for every $x \in V$ with $\|x\| \leqslant 1$, then $\phi$ is continuous.

The uniform norm $\|f\|=\sup _{x \in \mathbb{R}}|f(x)|$ is complete on the vector space of continuous real-valued functions $f$ on $\mathbb{R}$ for which $f(x)=0$ for $|x|$ sufficiently large.

The uniform norm $\|f\|=\sup _{x \in \mathbb{R}}|f(x)|$ is complete on the vector space of continuous real-valued functions $f$ on $\mathbb{R}$ which are bounded.

Paper 4, Section I, 3F

State and prove the chain rule for differentiable mappings $F: \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$ and $G: \mathbb{R}^{m} \rightarrow \mathbb{R}^{k}$.

Suppose now $F: \mathbb{R}^{2} \rightarrow \mathbb{R}^{2}$ has image lying on the unit circle in $\mathbb{R}^{2}$. Prove that the determinant $\operatorname{det}\left(\left.D F\right|_{x}\right)$ vanishes for every $x \in \mathbb{R}^{2}$.

Paper 4, Section II, F

State the contraction mapping theorem.

A metric space $(X, d)$ is bounded if $\{d(x, y) \mid x, y \in X\}$ is a bounded subset of $\mathbb{R}$. Suppose $(X, d)$ is complete and bounded. Let $\operatorname{Maps}(X, X)$ denote the set of continuous maps from $X$ to itself. For $f, g \in \operatorname{Maps}(X, X)$, let

$\delta(f, g)=\sup _{x \in X} d(f(x), g(x))$

Prove that $(\operatorname{Maps}(X, X), \delta)$ is a complete metric space. Is the subspace $\mathcal{C} \subset \operatorname{Maps}(X, X)$ of contraction mappings a complete subspace?

Let $\tau: \mathcal{C} \rightarrow X$ be the map which associates to any contraction its fixed point. Prove that $\tau$ is continuous.

Paper 3, Section II, E

Let $D=\{z \in \mathbb{C} \mid |z|<1\}$ be the open unit disk, and let $C$ be its boundary (the unit circle), with the anticlockwise orientation. Suppose $\phi: C \rightarrow \mathbb{C}$ is continuous. Stating clearly any theorems you use, show that

$g_{\phi}(w)=\frac{1}{2 \pi i} \int_{C} \frac{\phi(z)}{z-w} d z$

is an analytic function of $w$ for $w \in D$.

Now suppose $\phi$ is the restriction of a holomorphic function $F$ defined on some annulus $1-\epsilon<|z|<1+\epsilon$. Show that $g_{\phi}(w)$ is the restriction of a holomorphic function defined on the open disc $|w|<1+\epsilon$.

Let $f_{\phi}:[0,2 \pi] \rightarrow \mathbb{C}$ be defined by $f_{\phi}(\theta)=\phi\left(e^{i \theta}\right)$. Express the coefficients in the power series expansion of $g_{\phi}$ centered at 0 in terms of $f_{\phi}$.

Let $n \in \mathbb{Z}$. What is $g_{\phi}$ in the following cases?

$\phi(z)=z^{n}$.

$\phi(z)=\bar{z}^{n}$.

$\phi(z)=(\operatorname{Re} z)^{2}$.

Paper 4, Section I, E

State Rouché's theorem. How many roots of the polynomial $z^{8}+3 z^{7}+6 z^{2}+1$ are contained in the annulus $1<|z|<2$?
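The count predicted by Rouché's theorem can be cross-checked numerically (an illustrative sketch, not the intended method of solution):

```python
import numpy as np

# Roots of z^8 + 3z^7 + 6z^2 + 1 with modulus strictly between 1 and 2.
coeffs = [1, 3, 0, 0, 0, 0, 6, 0, 1]   # coefficients from degree 8 down to 0
roots = np.roots(coeffs)
in_annulus = sum(1 < abs(r) < 2 for r in roots)
print(in_annulus)
```

On $|z|=1$ the term $6z^{2}$ dominates (giving 2 roots inside), and on $|z|=2$ the term $3z^{7}$ dominates (giving 7 inside), so the numeric count should agree with $7-2$.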

Paper 1, Section I, 2D

Classify the singularities (in the finite complex plane) of the following functions: (i) $\frac{1}{(\cosh z)^{2}}$; (ii) $\frac{1}{\cos (1 / z)}$; (iii) $\frac{1}{\log z} \quad(-\pi<\arg z<\pi)$; (iv) $\frac{z^{\frac{1}{2}}-1}{\sin \pi z} \quad(-\pi<\arg z<\pi)$.

Paper 1, Section II, E

Suppose $p(z)$ is a polynomial of even degree, all of whose roots satisfy $|z|<R$. Explain why there is a holomorphic (i.e. analytic) function $h(z)$ defined on the region $R<|z|<\infty$ which satisfies $h(z)^{2}=p(z)$. We write $h(z)=\sqrt{p(z)}$.

By expanding in a Laurent series or otherwise, evaluate

$\int_{C} \sqrt{z^{4}-z} d z$

where $C$ is the circle of radius 2 with the anticlockwise orientation. (Your answer will be well-defined up to a factor of $\pm 1$, depending on which square root you pick.)

Paper 2, Section II, 13D

Let

$I=\oint_{C} \frac{e^{i z^{2} / \pi}}{1+e^{-2 z}} d z$

where $C$ is the rectangle with vertices at $\pm R$ and $\pm R+i \pi$, traversed anti-clockwise.

(i) Show that $I=\frac{\pi(1+i)}{\sqrt{2}}$.

(ii) Assuming that the contribution to $I$ from the vertical sides of the rectangle is negligible in the limit $R \rightarrow \infty$, show that

$\int_{-\infty}^{\infty} e^{i x^{2} / \pi} d x=\frac{\pi(1+i)}{\sqrt{2}}$

(iii) Justify briefly the assumption that the contribution to $I$ from the vertical sides of the rectangle is negligible in the limit $R \rightarrow \infty$.

Paper 3, Section I, D

Let $y(t)=0$ for $t<0$, and let $\lim _{t \rightarrow 0^{+}} y(t)=y_{0}$.

(i) Find the Laplace transforms of $H(t)$ and $t H(t)$, where $H(t)$ is the Heaviside step function.

(ii) Given that the Laplace transform of $y(t)$ is $\widehat{y}(s)$, find expressions for the Laplace transforms of $\dot{y}(t)$ and $y(t-1)$.

(iii) Use Laplace transforms to solve the equation

$\dot{y}(t)-y(t-1)=H(t)-(t-1) H(t-1)$

in the case $y_{0}=0$.

Paper 4, Section II, D

Let $C_{1}$ and $C_{2}$ be the circles $x^{2}+y^{2}=1$ and $5 x^{2}-4 x+5 y^{2}=0$, respectively, and let $D$ be the (finite) region between the circles. Use the conformal mapping

$w=\frac{z-2}{2 z-1}$

to solve the following problem:

$\nabla^{2} \phi=0 \text { in } D \text { with } \phi=1 \text { on } C_{1} \text { and } \phi=2 \text { on } C_{2}$

Paper 1, Section II, 16D

Briefly explain the main assumptions leading to Drude's theory of conductivity. Show that these assumptions lead to the following equation for the average drift velocity $\langle\mathbf{v}(t)\rangle$ of the conducting electrons:

$\frac{d\langle\mathbf{v}\rangle}{d t}=-\tau^{-1}\langle\mathbf{v}\rangle+(e / m) \mathbf{E}$

where $m$ and $e$ are the mass and charge of each conducting electron, $\tau^{-1}$ is the probability that a given electron collides with an ion in unit time, and $\mathbf{E}$ is the applied electric field.

Given that $\langle\mathbf{v}\rangle=\mathbf{v}_{0} e^{-i \omega t}$ and $\mathbf{E}=\mathbf{E}_{0} e^{-i \omega t}$, where $\mathbf{v}_{0}$ and $\mathbf{E}_{0}$ are independent of $t$, show that

$\mathbf{J}=\sigma \mathbf{E}$

Here, $\sigma=\sigma_{s} /(1-i \omega \tau), \sigma_{s}=n e^{2} \tau / m$ and $n$ is the number of conducting electrons per unit volume.

Now let $\mathbf{v}_{0}=\widetilde{\mathbf{v}}_{0} e^{i \mathbf{k} \cdot \mathbf{x}}$ and $\mathbf{E}_{0}=\widetilde{\mathbf{E}}_{0} e^{i \mathbf{k} \cdot \mathbf{x}}$, where $\widetilde{\mathbf{v}}_{0}$ and $\widetilde{\mathbf{E}}_{0}$ are constant. Assuming that $(*)$ remains valid, use Maxwell's equations (taking the charge density to be everywhere zero but allowing for a non-zero current density) to show that

$k^{2}=\frac{\omega^{2}}{c^{2}} \epsilon_{r}$

where the relative permittivity $\epsilon_{r}=1+i \sigma /\left(\omega \epsilon_{0}\right)$ and $k=|\mathbf{k}|$.

In the case $\omega \tau \gg 1$ and $\omega<\omega_{p}$, where $\omega_{p}^{2}=\sigma_{s} / \tau \epsilon_{0}$, show that the wave decays exponentially with distance inside the conductor.

Paper 2, Section I, D

Use Maxwell's equations to obtain the equation of continuity

$\frac{\partial \rho}{\partial t}+\nabla \cdot \mathbf{J}=0$

Show that, for a body made from material of uniform conductivity $\sigma$, the charge density at any fixed internal point decays exponentially in time. If the body is finite and isolated, explain how this result can be consistent with overall charge conservation.

Paper 2, Section II, D

Starting with the expression

$\mathbf{A}(\mathbf{r})=\frac{\mu_{0}}{4 \pi} \int \frac{\mathbf{J}\left(\mathbf{r}^{\prime}\right) d V^{\prime}}{\left|\mathbf{r}-\mathbf{r}^{\prime}\right|}$

for the magnetic vector potential at the point $\mathbf{r}$ due to a current distribution of density $\mathbf{J}(\mathbf{r})$, obtain the Biot-Savart law for the magnetic field due to a current $I$ flowing in a simple loop $C$:

$\mathbf{B}(\mathbf{r})=-\frac{\mu_{0} I}{4 \pi} \oint_{C} \frac{d \mathbf{r}^{\prime} \times\left(\mathbf{r}^{\prime}-\mathbf{r}\right)}{\left|\mathbf{r}^{\prime}-\mathbf{r}\right|^{3}} \quad(\mathbf{r} \notin C) .$

Verify by direct differentiation that this satisfies $\boldsymbol{\nabla} \times \mathbf{B}=\mathbf{0}$. You may use without proof the identity $\boldsymbol{\nabla} \times(\mathbf{a} \times \mathbf{v})=\mathbf{a}(\boldsymbol{\nabla} \cdot \mathbf{v})-(\mathbf{a} \cdot \boldsymbol{\nabla}) \mathbf{v}$, where $\mathbf{a}$ is a constant vector and $\mathbf{v}$ is a vector field.

Given that $C$ is planar, and is described in cylindrical polar coordinates by $z=0$, $r=f(\theta)$, show that the magnetic field at the origin is

$\widehat{\mathbf{z}} \frac{\mu_{0} I}{4 \pi} \oint \frac{d \theta}{f(\theta)}$

If $C$ is the ellipse $r(1-e \cos \theta)=\ell$, find the magnetic field at the focus due to a current $I$.
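For the ellipse $r(1-e \cos \theta)=\ell$, the integral in the boxed formula reduces to $\oint d\theta/f(\theta) = \frac{1}{\ell}\int_{0}^{2\pi}(1-e\cos\theta)\,d\theta = 2\pi/\ell$, independent of $e$. A quick numerical spot-check (the values of $\ell$ and $e$ below are arbitrary sample choices, not from the question):

```python
import numpy as np

# Check that the integral over one full period equals 2*pi/ell for any e.
ell, e = 1.7, 0.6
theta = np.linspace(0, 2 * np.pi, 100_000, endpoint=False)
integral = np.mean((1 - e * np.cos(theta)) / ell) * 2 * np.pi  # periodic mean * length
print(integral, 2 * np.pi / ell)
```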

Paper 3, Section II, D

Three sides of a closed rectangular circuit $C$ are fixed and one is moving. The circuit lies in the plane $z=0$ and the sides are $x=0, y=0, x=a(t), y=b$, where $a(t)$ is a given function of time. A magnetic field $\mathbf{B}=\left(0,0, \frac{\partial f}{\partial x}\right)$ is applied, where $f(x, t)$ is a given function of $x$ and $t$ only. Find the magnetic flux $\Phi$ of $\mathbf{B}$ through the surface $S$ bounded by $C$.

Find an electric field $\mathbf{E}_{\mathbf{0}}$ that satisfies the Maxwell equation

$\boldsymbol{\nabla} \times \mathbf{E}=-\frac{\partial \mathbf{B}}{\partial t}$

and then write down the most general solution $\mathbf{E}$ in terms of $\mathbf{E}_{0}$ and an undetermined scalar function independent of $f$.

Verify that

$\oint_{C}(\mathbf{E}+\mathbf{v} \times \mathbf{B}) \cdot d \mathbf{r}=-\frac{d \Phi}{d t},$

where $\mathbf{v}$ is the velocity of the relevant side of $C$. Interpret the left hand side of this equation.

If a unit current flows round $C$, what is the rate of work required to maintain the motion of the moving side of the rectangle? You should ignore any electromagnetic fields produced by the current.

Paper 4, Section I, D

The infinite plane $z=0$ is earthed and the infinite plane $z=d$ carries a charge of $\sigma$ per unit area. Find the electrostatic potential between the planes.

Show that the electrostatic energy per unit area (of the planes $z=$ constant) between the planes can be written as either $\frac{1}{2} \sigma^{2} d / \epsilon_{0}$ or $\frac{1}{2} \epsilon_{0} V^{2} / d$, where $V$ is the potential at $z=d$.

The distance between the planes is now increased by $\alpha d$, where $\alpha$ is small. Show that the change in the energy per unit area is $\frac{1}{2} \sigma V \alpha$ if the upper plane $(z=d)$ is electrically isolated, and is approximately $-\frac{1}{2} \sigma V \alpha$ if instead the potential on the upper plane is maintained at $V$. Explain briefly how this difference can be accounted for.

Paper 1, Section I, A

A two-dimensional flow is given by

$\mathbf{u}=(x,-y+t)$

Show that the flow is both irrotational and incompressible. Find a stream function $\psi(x, y)$ such that $\mathbf{u}=\left(\frac{\partial \psi}{\partial y},-\frac{\partial \psi}{\partial x}\right)$. Sketch the streamlines at $t=0$.

Find the pathline of a fluid particle that passes through $\left(x_{0}, y_{0}\right)$ at $t=0$ in the form $y=f\left(x, x_{0}, y_{0}\right)$, and sketch the pathline for $x_{0}=1, y_{0}=1$.
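One way to sanity-check a candidate pathline is to integrate the particle equations $\dot{x}=x$, $\dot{y}=-y+t$ numerically and compare with the closed form $x=x_{0}e^{t}$, $y=(y_{0}+1)e^{-t}+t-1$ (a solution sketch under these assumed formulas, not the examiners' answer):

```python
import numpy as np

# RK4 integration of dx/dt = x, dy/dt = -y + t from (x0, y0) = (1, 1).
def rk4(t_end, n_steps, x0=1.0, y0=1.0):
    f = lambda t, s: np.array([s[0], -s[1] + t])
    h, s, t = t_end / n_steps, np.array([x0, y0]), 0.0
    for _ in range(n_steps):
        k1 = f(t, s); k2 = f(t + h/2, s + h/2*k1)
        k3 = f(t + h/2, s + h/2*k2); k4 = f(t + h, s + h*k3)
        s, t = s + h/6*(k1 + 2*k2 + 2*k3 + k4), t + h
    return s

x1, y1 = rk4(1.0, 1000)
print(x1, np.e)      # closed form gives x(1) = e
print(y1, 2 / np.e)  # closed form gives y(1) = 2/e
```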

Paper 1, Section II, A

Starting from the Euler momentum equation, derive the form of Bernoulli's equation appropriate for an unsteady irrotational motion of an inviscid incompressible fluid.

Water of density $\rho$ is driven through a horizontal tube of length $L$ and internal radius $a$ from a water-filled balloon attached to one end of the tube. Assume that the pressure exerted by the balloon is proportional to its current volume (in excess of atmospheric pressure). Also assume that water exits the tube at atmospheric pressure, and that gravity may be neglected. Show that the time for the balloon to empty does not depend on its initial volume. Find the maximum speed of water exiting the pipe.

Paper 2, Section I, A

An incompressible, inviscid fluid occupies the region beneath the free surface $y=\eta(x, t)$ and moves with a velocity field determined by the velocity potential $\phi(x, y, t)$. Gravity acts in the $-y$ direction. You may assume Bernoulli's integral of the equation of motion:

$\frac{p}{\rho}+\frac{\partial \phi}{\partial t}+\frac{1}{2}|\nabla \phi|^{2}+g y=F(t)$

Give the kinematic and dynamic boundary conditions that must be satisfied by $\phi$ on $y=\eta(x, t)$.

In the absence of waves, the fluid has constant uniform velocity $U$ in the $x$ direction. Derive the linearised form of the boundary conditions for small amplitude waves.

Assume that the free surface and velocity potential are of the form:

$\begin{aligned} \eta &=a e^{i(k x-\omega t)} \\ \phi &=U x+i b e^{k y} e^{i(k x-\omega t)} \end{aligned}$

(where implicitly the real parts are taken). Show that

$(\omega-k U)^{2}=g k$

Paper 3, Section II, A

A layer of incompressible fluid of density $\rho$ and viscosity $\mu$ flows steadily down a plane inclined at an angle $\theta$ to the horizontal. The layer is of uniform thickness $h$ measured perpendicular to the plane, and the viscosity of the overlying air can be neglected. Using coordinates $x$ parallel to the plane (in the steepest downward direction) and $y$ normal to the plane, write down the equations of motion and the boundary conditions on the plane and on the free top surface. Determine the pressure and velocity fields and show that the volume flux down the plane is

$\frac{\rho g h^{3} \sin \theta}{3 \mu}$

Consider now the case where a second layer of fluid, of uniform thickness $\alpha h$, viscosity $\beta \mu$ and density $\rho$, flows steadily on top of the first layer. Explain why one of the appropriate boundary conditions between the two fluids is

$\mu \frac{\partial}{\partial y} u\left(h_{-}\right)=\beta \mu \frac{\partial}{\partial y} u\left(h_{+}\right),$

where $u$ is the component of velocity in the $x$ direction and $h_{-}$and $h_{+}$refer to just below and just above the boundary respectively. Determine the velocity field in each layer.

Paper 4, Section II, A

The axisymmetric, irrotational flow generated by a solid sphere of radius $a$ translating at velocity $U$ in an inviscid, incompressible fluid is represented by a velocity potential $\phi(r, \theta)$. Assume the fluid is at rest far away from the sphere. Explain briefly why $\nabla^{2} \phi=0$.

By trying a solution of the form $\phi(r, \theta)=f(r) g(\theta)$, show that

$\phi=-\frac{U a^{3} \cos \theta}{2 r^{2}}$

and write down the fluid velocity.

Show that the total kinetic energy of the fluid is $k M U^{2} / 4$ where $M$ is the mass of the sphere and $k$ is the ratio of the density of the fluid to the density of the sphere.

A heavy sphere (i.e. $k<1$ ) is released from rest in an inviscid fluid. Determine its speed after it has fallen a distance $h$ in terms of $M, k, g$ and $h$.

Note, in spherical polars:

$\begin{gathered} \boldsymbol{\nabla} \phi=\frac{\partial \phi}{\partial r} \mathbf{e}_{\mathbf{r}}+\frac{1}{r} \frac{\partial \phi}{\partial \theta} \mathbf{e}_{\theta} \\ \nabla^{2} \phi=\frac{1}{r^{2}} \frac{\partial}{\partial r}\left(r^{2} \frac{\partial \phi}{\partial r}\right)+\frac{1}{r^{2} \sin \theta} \frac{\partial}{\partial \theta}\left(\sin \theta \frac{\partial \phi}{\partial \theta}\right) \end{gathered}$

Paper 1, Section I, F

Let $l_{1}$ and $l_{2}$ be ultraparallel geodesics in the hyperbolic plane. Prove that the $l_{i}$ have a unique common perpendicular.

Suppose now $l_{1}, l_{2}, l_{3}$ are pairwise ultraparallel geodesics in the hyperbolic plane. Can the three common perpendiculars be pairwise disjoint? Must they be pairwise disjoint? Briefly justify your answers.

Paper 2, Section II, F

Let $A$ and $B$ be disjoint circles in $\mathbb{C}$. Prove that there is a Möbius transformation which takes $A$ and $B$ to two concentric circles.

A collection of circles $X_{i} \subset \mathbb{C}, 0 \leqslant i \leqslant n-1$, for which

- $X_{i}$ is tangent to $A, B$ and $X_{i+1}$, where indices are $\bmod n$;
- the circles are disjoint away from tangency points;

is called a constellation on $(A, B)$. Prove that for any $n \geqslant 2$ there is some pair $(A, B)$ and a constellation on $(A, B)$ made up of precisely $n$ circles. Draw a picture illustrating your answer.

Given a constellation on $(A, B)$, prove that the tangency points $X_{i} \cap X_{i+1}$ for $0 \leqslant i \leqslant n-1$ all lie on a circle. Moreover, prove that if we take any other circle $Y_{0}$ tangent to $A$ and $B$, and then construct $Y_{i}$ for $i \geqslant 1$ inductively so that $Y_{i}$ is tangent to $A, B$ and $Y_{i-1}$, then we will have $Y_{n}=Y_{0}$, i.e. the chain of circles will again close up to form a constellation.

Paper 3, Section I, F

Let $S$ be a surface with Riemannian metric having first fundamental form $d u^{2}+G(u, v) d v^{2}$. State a formula for the Gauss curvature $K$ of $S$.

Suppose that $S$ is flat, so $K$ vanishes identically, and that $u=0$ is a geodesic on $S$ when parametrised by arc-length. Using the geodesic equations, or otherwise, prove that $G(u, v) \equiv 1$, i.e. $S$ is locally isometric to a plane.

Paper 3, Section II, F

Show that the set of all straight lines in $\mathbb{R}^{2}$ admits the structure of an abstract smooth surface $S$. Show that $S$ is an open Möbius band (i.e. the Möbius band without its boundary circle), and deduce that $S$ admits a Riemannian metric with vanishing Gauss curvature.

Show that there is no metric $d: S \times S \rightarrow \mathbb{R}_{\geqslant 0}$, in the sense of metric spaces, which

- induces the locally Euclidean topology on $S$ constructed above;
- is invariant under the natural action on $S$ of the group of translations of $\mathbb{R}^{2}$.

Show that the set of great circles on the two-dimensional sphere admits the structure of a smooth surface $S^{\prime}$. Is $S^{\prime}$ homeomorphic to $S$? Does $S^{\prime}$ admit a Riemannian metric with vanishing Gauss curvature? Briefly justify your answers.

Paper 4, Section II, F

Let $\eta$ be a smooth curve in the $x z$-plane $\eta(s)=(f(s), 0, g(s))$, with $f(s)>0$ for every $s \in \mathbb{R}$ and $f^{\prime}(s)^{2}+g^{\prime}(s)^{2}=1$. Let $S$ be the surface obtained by rotating $\eta$ around the $z$-axis. Find the first fundamental form of $S$.

State the equations for a curve $\gamma:(a, b) \rightarrow S$ parametrised by arc-length to be a geodesic.

A parallel on $S$ is the closed circle swept out by rotating a single point of $\eta$. Prove that for every $n \in \mathbb{Z}_{>0}$ there is some $\eta$ for which exactly $n$ parallels are geodesics. Sketch possible such surfaces $S$ in the cases $n=1$ and $n=2$.

If every parallel is a geodesic, what can you deduce about $S$ ? Briefly justify your answer.

Paper 1, Section II, G

(i) Consider the group $G=G L_{2}(\mathbb{R})$ of all 2 by 2 matrices with entries in $\mathbb{R}$ and non-zero determinant. Let $T$ be its subgroup consisting of all diagonal matrices, and $N$ be the normaliser of $T$ in $G$. Show that $N$ is generated by $T$ and $\left(\begin{array}{ll}0 & 1 \\ 1 & 0\end{array}\right)$, and determine the quotient group $N / T$.

(ii) Now let $p$ be a prime number, and $F$ be the field of integers modulo $p$. Consider the group $G=G L_{2}(F)$ as above but with entries in $F$, and define $T$ and $N$ similarly. Find the order of the group $N$.

Paper 2, Section I, G

Show that every Euclidean domain is a PID. Define the notion of a Noetherian ring, and show that $\mathbb{Z}[i]$ is Noetherian by using the fact that it is a Euclidean domain.

Paper 2, Section II, G

(i) State the structure theorem for finitely generated modules over Euclidean domains.

(ii) Let $\mathbb{C}[X]$ be the polynomial ring over the complex numbers. Let $M$ be a $\mathbb{C}[X]$ module which is 4-dimensional as a $\mathbb{C}$-vector space and such that $(X-2)^{4} \cdot x=0$ for all $x \in M$. Find all possible forms we obtain when we write $M \cong \bigoplus_{i=1}^{m} \mathbb{C}[X] /\left(P_{i}^{n_{i}}\right)$ for irreducible $P_{i} \in \mathbb{C}[X]$ and $n_{i} \geqslant 1$.

(iii) Consider the quotient ring $M=\mathbb{C}[X] /\left(X^{3}+X\right)$ as a $\mathbb{C}[X]$-module. Show that $M$ is isomorphic as a $\mathbb{C}[X]$-module to the direct sum of three copies of $\mathbb{C}$. Give the isomorphism and its inverse explicitly.
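For part (iii), the splitting follows because $X^{3}+X = X(X-i)(X+i)$ has distinct roots. A numerical illustration (not a proof): the matrix of multiplication by $X$ on the quotient, in the basis $1, X, X^{2}$, is the companion matrix of $X^{3}+X$, and its three distinct eigenvalues $0, i, -i$ exhibit the decomposition into three copies of $\mathbb{C}$:

```python
import numpy as np

# Multiplication-by-X operator on C[X]/(X^3 + X) in the basis 1, X, X^2:
# X*1 = X, X*X = X^2, X*X^2 = X^3 = -X (mod X^3 + X).
companion = np.array([[0, 0, 0],
                      [1, 0, -1],
                      [0, 1, 0]], dtype=complex)
eigs = np.linalg.eigvals(companion)
print(sorted(eigs, key=lambda z: z.imag))  # expect approximately -i, 0, i
```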

Paper 3, Section I, 1G

Define the notion of a free module over a ring. When $R$ is a PID, show that every ideal of $R$ is free as an $R$-module.

Paper 3, Section II, G

Let $R=\mathbb{C}[X, Y]$ be the polynomial ring in two variables over the complex numbers, and consider the principal ideal $I=\left(X^{3}-Y^{2}\right)$ of $R$.

(i) Using the fact that $R$ is a UFD, show that $I$ is a prime ideal of $R$. [Hint: Elements in $\mathbb{C}[X, Y]$ are polynomials in $Y$ with coefficients in $\mathbb{C}[X]$.]

(ii) Show that $I$ is not a maximal ideal of $R$, and that it is contained in infinitely many distinct proper ideals in $R$.

Paper 4, Section I, 2G

Let $p$ be a prime number, and $G$ be a non-trivial finite group whose order is a power of $p$. Show that the size of every conjugacy class in $G$ is a power of $p$. Deduce that the centre $Z$ of $G$ has order at least $p$.

Paper 4, Section II, 11G

Let $R$ be an integral domain, and $M$ be a finitely generated $R$-module.

(i) Let $S$ be a finite subset of $M$ which generates $M$ as an $R$-module. Let $T$ be a maximal linearly independent subset of $S$, and let $N$ be the $R$-submodule of $M$ generated by $T$. Show that there exists a non-zero $r \in R$ such that $r x \in N$ for every $x \in M$.

(ii) Now assume $M$ is torsion-free, i.e. $r x=0$ for $r \in R$ and $x \in M$ implies $r=0$ or $x=0$. By considering the map $M \rightarrow N$ mapping $x$ to $r x$ for $r$ as in (i), show that every torsion-free finitely generated $R$-module is isomorphic to an $R$-submodule of a finitely generated free $R$-module.

Paper 1, Section I, E

What is the adjugate of an $n \times n$ matrix $A$? How is it related to $A^{-1}$? Suppose all the entries of $A$ are integers. Show that all the entries of $A^{-1}$ are integers if and only if $\operatorname{det} A=\pm 1$.

Paper 1, Section II, E

If $V_{1}$ and $V_{2}$ are vector spaces, what is meant by $V_{1} \oplus V_{2}$? If $V_{1}$ and $V_{2}$ are subspaces of a vector space $V$, what is meant by $V_{1}+V_{2}$?

Stating clearly any theorems you use, show that if $V_{1}$ and $V_{2}$ are subspaces of a finite dimensional vector space $V$, then

$\operatorname{dim} V_{1}+\operatorname{dim} V_{2}=\operatorname{dim}\left(V_{1} \cap V_{2}\right)+\operatorname{dim}\left(V_{1}+V_{2}\right)$

Let $V_{1}, V_{2} \subset \mathbb{R}^{4}$ be subspaces with bases

$\begin{gathered} V_{1}=\langle(3,2,4,-1),(1,2,1,-2),(-2,3,3,2)\rangle \\ V_{2}=\langle(1,4,2,4),(-1,1,-1,-1),(3,1,2,0)\rangle . \end{gathered}$

Find a basis $\left\langle\mathbf{v}_{1}, \mathbf{v}_{2}\right\rangle$ for $V_{1} \cap V_{2}$ such that the first component of $\mathbf{v}_{1}$ and the second component of $\mathbf{v}_{2}$ are both 0.

Paper 2, Section I, E

If $A$ is an $n \times n$ invertible Hermitian matrix, let

$U_{A}=\left\{U \in M_{n \times n}(\mathbb{C}) \mid \bar{U}^{T} A U=A\right\}$

Show that $U_{A}$ with the operation of matrix multiplication is a group, and that $\operatorname{det} U$ has norm 1 for any $U \in U_{A}$. What is the relation between $U_{A}$ and the complex Hermitian form defined by $A$?

If $A=I_{n}$ is the $n \times n$ identity matrix, show that any element of $U_{A}$ is diagonalizable.

Paper 2, Section II, E

Define what it means for a set of vectors in a vector space $V$ to be linearly dependent. Prove from the definition that any set of $n+1$ vectors in $\mathbb{R}^{n}$ is linearly dependent.

Using this or otherwise, prove that if $V$ has a finite basis consisting of $n$ elements, then any basis of $V$ has exactly $n$ elements.

Let $V$ be the vector space of bounded continuous functions on $\mathbb{R}$. Show that $V$ is infinite dimensional.

Paper 3, Section II, E

Let $V$ and $W$ be finite dimensional real vector spaces and let $T: V \rightarrow W$ be a linear map. Define the dual space $V^{*}$ and the dual map $T^{*}$. Show that there is an isomorphism $\iota: V \rightarrow\left(V^{*}\right)^{*}$ which is canonical, in the sense that $\iota \circ S=\left(S^{*}\right)^{*} \circ \iota$ for any automorphism $S$ of $V$.

Now let $W$ be an inner product space. Use the inner product to show that there is an injective map from im $T$ to $\operatorname{im} T^{*}$. Deduce that the row rank of a matrix is equal to its column rank.

Paper 4, Section I, E

What is a quadratic form on a finite dimensional real vector space $V$? What does it mean for two quadratic forms to be isomorphic (i.e. congruent)? State Sylvester's law of inertia and explain the definition of the quantities which appear in it. Find the signature of the quadratic form on $\mathbb{R}^{3}$ given by $q(\mathbf{v})=\mathbf{v}^{T} A \mathbf{v}$, where

$A=\left(\begin{array}{ccc} -2 & 1 & 6 \\ 1 & -1 & -3 \\ 6 & -3 & 1 \end{array}\right)$
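By Sylvester's law, the signature can be read off from the signs of the eigenvalues of $A$ (any orthogonal diagonalisation gives the same counts), which suggests a quick numerical check:

```python
import numpy as np

# Count positive and negative eigenvalues of the symmetric matrix A.
A = np.array([[-2, 1, 6],
              [1, -1, -3],
              [6, -3, 1]], dtype=float)
eigs = np.linalg.eigvalsh(A)
p, q = int(np.sum(eigs > 0)), int(np.sum(eigs < 0))
print(p, q)  # positive and negative indices of inertia
```

A hand check via leading principal minors ($-2$, $1$, $19$: two sign changes in $1, -2, 1, 19$) agrees with one positive and two negative eigenvalues.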

Paper 4, Section II, E

What does it mean for an $n \times n$ matrix to be in Jordan form? Show that if $A \in M_{n \times n}(\mathbb{C})$ is in Jordan form, there is a sequence $\left(A_{m}\right)$ of diagonalizable $n \times n$ matrices which converges to $A$, in the sense that the $(i j)$ th component of $A_{m}$ converges to the $(i j)$ th component of $A$ for all $i$ and $j$. [Hint: A matrix with distinct eigenvalues is diagonalizable.] Deduce that the same statement holds for all $A \in M_{n \times n}(\mathbb{C})$.

Let $V=M_{2 \times 2}(\mathbb{C})$. Given $A \in V$, define a linear map $T_{A}: V \rightarrow V$ by $T_{A}(B)=A B+B A$. Express the characteristic polynomial of $T_{A}$ in terms of the trace and determinant of $A$. [Hint: First consider the case where $A$ is diagonalizable.]

Paper 1, Section II, 20H

A Markov chain has state space $\{a, b, c\}$ and transition matrix

$P=\left(\begin{array}{ccc} 0 & 3 / 5 & 2 / 5 \\ 3 / 4 & 0 & 1 / 4 \\ 2 / 3 & 1 / 3 & 0 \end{array}\right)$

where the rows $1,2,3$ correspond to $a, b, c$, respectively. Show that this Markov chain is equivalent to a random walk on some graph with 6 edges.

Let $k(i, j)$ denote the mean first passage time from $i$ to $j$.

(i) Find $k(a, a)$ and $k(a, b)$.

(ii) Given $X_{0}=a$, find the expected number of steps until the walk first completes a step from $b$ to $c$.

(iii) Suppose the distribution of $X_{0}$ is $\left(\pi_{1}, \pi_{2}, \pi_{3}\right)=(5,4,3) / 12$. Let $\tau(a, b)$ be the least $m$ such that $\{a, b\}$ appears as a subsequence of $\left\{X_{0}, X_{1}, \ldots, X_{m}\right\}$. By comparing the distributions of $\left\{X_{0}, X_{1}, \ldots, X_{m}\right\}$ and $\left\{X_{m}, \ldots, X_{1}, X_{0}\right\}$ show that $E \tau(a, b)=E \tau(b, a)$ and that

$k(b, a)-k(a, b)=\sum_{i \in\{a, b, c\}} \pi_{i}[k(i, a)-k(i, b)]$
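The graph structure behind this chain can be verified numerically. With edge weights $w_{ab}=3$, $w_{ac}=2$, $w_{bc}=1$ (six edges counted with multiplicity, one consistent choice assumed here for illustration), the stationary distribution is proportional to the vertex degrees $(5,4,3)$, matching the distribution given in part (iii), and $k(a,a)=1/\pi_{a}$:

```python
import numpy as np

# Transition matrix of the chain (rows a, b, c) and the candidate
# stationary distribution proportional to vertex degrees (5, 4, 3).
P = np.array([[0, 3/5, 2/5],
              [3/4, 0, 1/4],
              [2/3, 1/3, 0]])
pi = np.array([5, 4, 3]) / 12
print(np.allclose(pi @ P, pi))  # invariance of pi
print(1 / pi[0])                # mean return time k(a, a)
```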

Paper 2, Section II, H

(i) Suppose $\left(X_{n}\right)_{n \geqslant 0}$ is an irreducible Markov chain and $f_{i j}=P\left(X_{n}=j\right.$ for some $\left.n \geqslant 1 \mid X_{0}=i\right)$. Prove that $f_{i i} \geqslant f_{i j} f_{j i}$ and that

$\sum_{n=0}^{\infty} P_{i}\left(X_{n}=i\right)=\sum_{n=1}^{\infty} f_{i i}^{n-1}$

(ii) Let $\left(X_{n}\right)_{n \geqslant 0}$ be a symmetric random walk on the $\mathbb{Z}^{2}$ lattice. Prove that $\left(X_{n}\right)_{n \geqslant 0}$ is recurrent. You may assume, for $n \geqslant 1$,

$1 / 2<2^{-2 n} \sqrt{n}\left(\begin{array}{c} 2 n \\ n \end{array}\right)<1$
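The assumed bound can be spot-checked directly; the quantity $2^{-2n}\sqrt{n}\binom{2n}{n}$ increases towards $1/\sqrt{\pi} \approx 0.5642$ (at $n=1$ the lower bound is attained with equality, so the check below starts at $n=2$):

```python
from math import comb, sqrt, pi

# Evaluate 4^{-n} * sqrt(n) * C(2n, n) for n = 2, ..., 200.
vals = [comb(2 * n, n) * sqrt(n) / 4 ** n for n in range(2, 201)]
print(min(vals), max(vals), 1 / sqrt(pi))
```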

(iii) A princess and monster perform independent random walks on the $\mathbb{Z}^{2}$ lattice. The trajectory of the princess is the symmetric random walk $\left(X_{n}\right)_{n \geqslant 0}$. The monster's trajectory, denoted $\left(Z_{n}\right)_{n \geqslant 0}$, is a sleepy version of an independent symmetric random walk $\left(Y_{n}\right)_{n \geqslant 0}$. Specifically, given an infinite sequence of integers $0=n_{0}<n_{1}<\cdots$, the monster sleeps between these times, so $Z_{n_{i}+1}=\cdots=Z_{n_{i+1}}=Y_{i+1}$. Initially, $X_{0}=(100,0)$ and $Z_{0}=Y_{0}=(0,100)$. The princess is captured if and only if at some future time she and the monster are simultaneously at $(0,0)$.

Compare the capture probabilities for an active monster, who takes $n_{i+1}=n_{i}+1$ for all $i$, and a sleepy monster, who takes $n_{i}$ spaced sufficiently widely so that

$P\left(X_{k}=(0,0) \text { for some } k \in\left\{n_{i}+1, \ldots, n_{i+1}\right\}\right)>1 / 2$

Paper 3, Section I, H

Prove that if a distribution $\pi$ is in detailed balance with a transition matrix $P$ then it is an invariant distribution for $P$.

Consider the following model with two urns. At each time $t=0,1, \ldots$, one of the following happens:

with probability $\beta$ a ball is chosen at random and moved to the other urn (but nothing happens if both urns are empty);

with probability $\gamma$ a ball is chosen at random and removed (but nothing happens if both urns are empty);

with probability $\alpha$ a new ball is added to a randomly chosen urn,

where $\alpha+\beta+\gamma=1$ and $\alpha<\gamma$. State $(i, j)$ denotes that urns 1 and 2 contain $i$ and $j$ balls respectively. Prove that there is an invariant measure

$\lambda_{i, j}=\frac{(i+j) !}{i ! j !}\left(\frac{\alpha}{2 \gamma}\right)^{i+j}$

Find the proportion of time for which there are $n$ balls in the system.
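
As an illustrative aside (not part of the question), the detailed-balance equations for $\lambda$ can be checked numerically on a small grid of states. The transition probabilities below follow the description above; the particular values of $\alpha, \beta, \gamma$ are an arbitrary admissible choice:

```python
from math import comb

# Arbitrary sample parameters with alpha + beta + gamma = 1, alpha < gamma
alpha, beta, gamma = 0.2, 0.3, 0.5

def lam(i, j):
    # Candidate invariant measure from the question
    return comb(i + j, i) * (alpha / (2 * gamma)) ** (i + j)

def p(s, t):
    # One-step transition probability from state s = (i, j) to t != s
    i, j = s
    m = i + j
    prob = 0.0
    # add a new ball to a randomly chosen urn
    if t == (i + 1, j) or t == (i, j + 1):
        prob += alpha / 2
    if m > 0:
        # move a uniformly chosen ball to the other urn
        if t == (i - 1, j + 1):
            prob += beta * i / m
        if t == (i + 1, j - 1):
            prob += beta * j / m
        # remove a uniformly chosen ball
        if t == (i - 1, j):
            prob += gamma * i / m
        if t == (i, j - 1):
            prob += gamma * j / m
    return prob

# Detailed balance: lam(s) p(s, t) == lam(t) p(t, s) for all pairs
states = [(i, j) for i in range(8) for j in range(8)]
for s in states:
    for t in states:
        assert abs(lam(*s) * p(s, t) - lam(*t) * p(t, s)) < 1e-12

# Summing lam over i + j = n gives (alpha/gamma)^n, so the proportion
# of time with n balls is (1 - alpha/gamma) * (alpha/gamma)^n.
for n in range(10):
    total = sum(lam(i, n - i) for i in range(n + 1))
    assert abs(total - (alpha / gamma) ** n) < 1e-12
```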

Paper 4, Section I, H

Suppose $P$ is the transition matrix of an irreducible recurrent Markov chain with state space $I$. Show that if $x$ is an invariant measure and $x_{k}>0$ for some $k \in I$, then $x_{j}>0$ for all $j \in I$.

Let

$\gamma_{j}^{k}=p_{k j}+\sum_{t=1}^{\infty} \sum_{i_{1} \neq k, \ldots, i_{t} \neq k} p_{k i_{t}} p_{i_{t} i_{t-1}} \cdots p_{i_{1} j}$

Give a meaning to $\gamma_{j}^{k}$ and explain why $\gamma_{k}^{k}=1$.

Suppose $x$ is an invariant measure with $x_{k}=1$. Prove that $x_{j} \geqslant \gamma_{j}^{k}$ for all $j$.
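
As a numerical illustration of this result (not part of the question), for a small irreducible chain one can compute $\gamma^{k}$ as the vector of expected visit counts during an excursion from $k$ and confirm that it is invariant; the $3 \times 3$ transition matrix below is an arbitrary example:

```python
# gamma^k_j is the expected number of visits to j between successive
# visits to k.  With k = 0 and taboo matrix Q (P restricted to states
# {1, 2}), gamma_j = sum_i p_{0i} N_{ij} where N = (I - Q)^{-1}.
P = [[0.0, 0.5, 0.5],
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]

# Invert the 2x2 matrix I - Q by hand
a, b = 1 - P[1][1], -P[1][2]
c, d = -P[2][1], 1 - P[2][2]
det = a * d - b * c
N = [[d / det, -b / det], [-c / det, a / det]]

gamma = [1.0,  # gamma^k_k = 1: k is visited exactly once per excursion
         P[0][1] * N[0][0] + P[0][2] * N[1][0],
         P[0][1] * N[0][1] + P[0][2] * N[1][1]]

# gamma is an invariant measure: gamma P = gamma
for j in range(3):
    lhs = sum(gamma[i] * P[i][j] for i in range(3))
    assert abs(lhs - gamma[j]) < 1e-12
print(gamma)
```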

Paper 1, Section II, B

(i) Let $f(x)=x, 0<x \leqslant \pi$. Obtain the Fourier sine series and sketch the odd and even periodic extensions of $f(x)$ over the interval $-2 \pi \leqslant x \leqslant 2 \pi$. Deduce that

$\sum_{n=1}^{\infty} \frac{1}{n^{2}}=\frac{\pi^{2}}{6}$
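
As a quick numerical aside, the partial sums of this series can be seen approaching $\pi^{2} / 6$:

```python
from math import pi

# Partial sum of sum 1/n^2; the tail after N terms is bounded by 1/N,
# so 10^5 terms agree with pi^2/6 ~ 1.6449 to about 1e-5.
s = sum(1 / n**2 for n in range(1, 100001))
assert abs(s - pi**2 / 6) < 1e-4
print(s, pi**2 / 6)
```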

(ii) Consider the eigenvalue problem

$\mathcal{L} y=-\frac{d^{2} y}{d x^{2}}-2 \frac{d y}{d x}=\lambda y, \quad \lambda \in \mathbb{R}$

with boundary conditions $y(0)=y(\pi)=0$. Find the eigenvalues and corresponding eigenfunctions. Recast $\mathcal{L}$ in Sturm-Liouville form and give the orthogonality condition for the eigenfunctions. Using the Fourier sine series obtained in part (i), or otherwise, and assuming completeness of the eigenfunctions, find a series for $y$ that satisfies

$\mathcal{L} y=x e^{-x}$

for the given boundary conditions.
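
As an aside, a candidate answer can be checked numerically: the substitution $y=e^{-x} v(x)$ suggests eigenfunctions of the form $e^{-x} \sin n x$ with $\lambda_{n}=n^{2}+1$, and the finite-difference sketch below tests exactly that (the eigenpairs are assumed here for checking, not derived):

```python
from math import exp, sin

# Check that y_n(x) = e^{-x} sin(n x) satisfies L y = -y'' - 2y'
# = (n^2 + 1) y, using central differences for the derivatives.
def y(n, x):
    return exp(-x) * sin(n * x)

h = 1e-5
for n in (1, 2, 3):
    for x in (0.3, 1.0, 2.5):
        d1 = (y(n, x + h) - y(n, x - h)) / (2 * h)
        d2 = (y(n, x + h) - 2 * y(n, x) + y(n, x - h)) / h**2
        Ly = -d2 - 2 * d1
        assert abs(Ly - (n**2 + 1) * y(n, x)) < 1e-4

# The boundary conditions hold since sin(0) = sin(n pi) = 0.
```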

Paper 2, Section I, B

Consider the equation

$x u_{x}+(x+y) u_{y}=1$

subject to the Cauchy data $u(1, y)=y$. Using the method of characteristics, obtain a solution to this equation.

Paper 2, Section II, B

The steady-state temperature distribution $u(x)$ in a uniform rod of finite length satisfies the boundary value problem

$\begin{gathered} -D \frac{d^{2}}{d x^{2}} u(x)=f(x), \quad 0<x<l \\ u(0)=0, \quad u(l)=0 \end{gathered}$

where $D>0$ is the (constant) diffusion coefficient. Determine the Green's function $G(x, \xi)$ for this problem. Now replace the above homogeneous boundary conditions with the inhomogeneous boundary conditions $u(0)=\alpha, \quad u(l)=\beta$ and give a solution to the new boundary value problem. Hence, obtain the steady-state solution for the following problem with the specified boundary conditions:

$\begin{aligned} &-D \frac{\partial^{2}}{\partial x^{2}} u(x, t)+\frac{\partial}{\partial t} u(x, t)=x, \quad 0<x<1 \\ &u(0, t)=1 / D, \quad u(1, t)=2 / D, \quad t>0 \end{aligned}$

[You may assume that a steady-state solution exists.]

Paper 3, Section I, C

The solution to the Dirichlet problem on the half-space $D=\{\mathbf{x}=(x, y, z): z>0\}$:

$\nabla^{2} u(\mathbf{x})=0, \quad \mathbf{x} \in D, \quad u(\mathbf{x}) \rightarrow 0 \quad \text { as } \quad|\mathbf{x}| \rightarrow \infty, \quad u(x, y, 0)=h(x, y)$

is given by the formula

$u\left(\mathbf{x}_{0}\right)=u\left(x_{0}, y_{0}, z_{0}\right)=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(x, y) \frac{\partial}{\partial n} G\left(\mathbf{x}, \mathbf{x}_{0}\right) d x d y$

where $n$ is the outward normal to $\partial D$.

State the boundary conditions on $G$ and explain how $G$ is related to $G_{3}$, where

$G_{3}\left(\mathbf{x}, \mathbf{x}_{0}\right)=-\frac{1}{4 \pi} \frac{1}{\left|\mathbf{x}-\mathbf{x}_{0}\right|}$

is the fundamental solution to the Laplace equation in three dimensions.

Using the method of images find an explicit expression for the function $\frac{\partial}{\partial n} G\left(\mathbf{x}, \mathbf{x}_{0}\right)$ in the formula.

Paper 3, Section II, C

The Laplace equation in plane polar coordinates has the form

$\nabla^{2} \phi=\left[\frac{1}{r} \frac{\partial}{\partial r}\left(r \frac{\partial}{\partial r}\right)+\frac{1}{r^{2}} \frac{\partial^{2}}{\partial \theta^{2}}\right] \phi(r, \theta)=0 .$

Using separation of variables, derive the general solution of the equation that is single-valued in the domain $1<r<2$.

For

$f(\theta)=\sum_{n=1}^{\infty} A_{n} \sin n \theta$

solve the Laplace equation in the annulus with the boundary conditions:

$\nabla^{2} \phi=0, \quad 1<r<2, \quad \phi(r, \theta)= \begin{cases}f(\theta), & r=1 \\ f(\theta)+1, & r=2\end{cases}$

Paper 4, Section I, C

Show that the general solution of the wave equation

$\frac{1}{c^{2}} \frac{\partial^{2} y}{\partial t^{2}}-\frac{\partial^{2} y}{\partial x^{2}}=0$

can be written in the form

$y(x, t)=f(c t-x)+g(c t+x) .$

For the boundary conditions

$y(0, t)=y(L, t)=0, \quad t>0,$

find the relation between $f$ and $g$ and show that they are $2 L$-periodic. Hence show that

$E(t)=\frac{1}{2} \int_{0}^{L}\left(\frac{1}{c^{2}}\left(\frac{\partial y}{\partial t}\right)^{2}+\left(\frac{\partial y}{\partial x}\right)^{2}\right) d x$

is independent of $t$.

Paper 4, Section II, C

Find the inverse Fourier transform $G(x)$ of the function

$g(k)=e^{-a|k|}, \quad a>0, \quad-\infty<k<\infty .$
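
As an illustrative aside, the inversion integral $\frac{1}{2 \pi} \int_{-\infty}^{\infty} e^{-a|k|} e^{i k x} d k$, which by symmetry equals $\frac{1}{\pi} \int_{0}^{\infty} e^{-a k} \cos (k x) d k$, can be evaluated by quadrature and compared with the Lorentzian $a /\left(\pi\left(a^{2}+x^{2}\right)\right)$ (quoted here purely as the value to check against; deriving it is the exercise):

```python
from math import exp, cos, pi

# Trapezoidal approximation of (1/pi) int_0^K e^{-a k} cos(k x) dk;
# the truncated tail beyond K = 60 is negligible for a = 1.5.
def G_numeric(x, a, K=60.0, M=200000):
    h = K / M
    total = 0.5 * (1.0 + exp(-a * K) * cos(K * x))  # endpoint terms
    for m in range(1, M):
        k = m * h
        total += exp(-a * k) * cos(k * x)
    return h * total / pi

a = 1.5
for x in (0.0, 0.7, 2.0):
    exact = a / (pi * (a * a + x * x))  # expected closed form
    assert abs(G_numeric(x, a) - exact) < 1e-6
```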

Assuming that appropriate Fourier transforms exist, determine the solution $\psi(x, y)$ of

$\nabla^{2} \psi=0, \quad-\infty<x<\infty, \quad 0<y<1$

with the following boundary conditions

$\psi(x, 0)=\delta(x), \quad \psi(x, 1)=\frac{1}{\pi} \frac{1}{x^{2}+1}$

Here $\delta(x)$ is the Dirac delta-function.

Paper 1, Section II, G

Consider the sphere $S^{2}=\left\{(x, y, z) \in \mathbb{R}^{3} \mid x^{2}+y^{2}+z^{2}=1\right\}$ as a subspace of $\mathbb{R}^{3}$ with the Euclidean metric.

(i) Show that $S^{2}$ is compact and Hausdorff as a topological space.

(ii) Let $X=S^{2} / \sim$ be the quotient set with respect to the equivalence relation identifying the antipodes, i.e.

$(x, y, z) \sim\left(x^{\prime}, y^{\prime}, z^{\prime}\right) \Longleftrightarrow\left(x^{\prime}, y^{\prime}, z^{\prime}\right)=(x, y, z) \text { or }(-x,-y,-z)$

Show that $X$ is compact and Hausdorff with respect to the quotient topology.

Paper 2, Section I, G

Let $X$ be a topological space. Prove or disprove the following statements.

(i) If $X$ is discrete, then $X$ is compact if and only if it is a finite set.

(ii) If $Y$ is a subspace of $X$ and $X, Y$ are both compact, then $Y$ is closed in $X$.

Paper 3, Section I, G

Let $X$ be a metric space with the metric $d: X \times X \rightarrow \mathbb{R}$.

(i) Show that if $X$ is compact as a topological space, then $X$ is complete.

(ii) Show that the completeness of $X$ is not a topological property, i.e. give an example of two metrics $d, d^{\prime}$ on a set $X$, such that the associated topologies are the same, but $(X, d)$ is complete and $\left(X, d^{\prime}\right)$ is not.

Paper 4, Section II, G

Let $X$ be a topological space. A connected component of $X$ is an equivalence class with respect to the equivalence relation on $X$ defined by:

$x \sim y \Longleftrightarrow x, y \text { belong to some connected subspace of } X .$

(i) Show that every connected component is a connected and closed subset of $X$.

(ii) If $X, Y$ are topological spaces and $X \times Y$ is the product space, show that every connected component of $X \times Y$ is a direct product of connected components of $X$ and $Y$.

Paper 1, Section I, C

Determine the nodes $x_{1}, x_{2}$ of the two-point Gaussian quadrature

$\int_{0}^{1} f(x) w(x) d x \approx a_{1} f\left(x_{1}\right)+a_{2} f\left(x_{2}\right), \quad w(x)=x$

and express the coefficients $a_{1}, a_{2}$ in terms of $x_{1}, x_{2}$. [You do not need to find numerical values of the coefficients.]
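
As an aside, a final answer can be checked numerically: the nodes are the roots of the monic quadratic orthogonal to $1$ and $x$ with respect to $w(x)=x$ on $[0,1]$, and the resulting rule should integrate polynomials up to degree 3 exactly against $w$. A sketch:

```python
from math import sqrt

# Moments mu_k = int_0^1 x^k * x dx = 1/(k + 2)
mu = [1 / (k + 2) for k in range(4)]

# p(x) = x^2 + b x + c orthogonal to 1 and x w.r.t. weight x:
#   b*mu1 + c*mu0 = -mu2,   b*mu2 + c*mu1 = -mu3   (solved by Cramer)
det = mu[1] * mu[1] - mu[0] * mu[2]
b = (mu[0] * mu[3] - mu[1] * mu[2]) / det
c = (mu[2] * mu[2] - mu[1] * mu[3]) / det
x1 = (-b - sqrt(b * b - 4 * c)) / 2
x2 = (-b + sqrt(b * b - 4 * c)) / 2

# Weights from exactness on f = 1 and f = x:
#   a1 + a2 = mu0,   a1*x1 + a2*x2 = mu1
a2 = (mu[1] - mu[0] * x1) / (x2 - x1)
a1 = mu[0] - a2

# Two-point Gauss rule is exact for all cubics, e.g. f(x) = x^3:
assert abs(a1 * x1**3 + a2 * x2**3 - mu[3]) < 1e-12
print(x1, x2)  # nodes (6 -+ sqrt(6))/10, approximately 0.355 and 0.845
```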

Paper 1, Section II, C

Define the QR factorization of an $m \times n$ matrix $A$ and explain how it can be used to solve the least squares problem of finding the vector $x^{*} \in \mathbb{R}^{n}$ which minimises $\|A x-b\|$, where $b \in \mathbb{R}^{m}, m>n$, and the norm is the Euclidean one.

Define a Givens rotation $\Omega^{[p, q]}$ and show that it is an orthogonal matrix.

Using a Givens rotation, solve the least squares problem for

$A=\left[\begin{array}{lll} 2 & 1 & 1 \\ 0 & 4 & 1 \\ 0 & 3 & 2 \\ 0 & 0 & 0 \end{array}\right], \quad b=\left[\begin{array}{l} 2 \\ 3 \\ 1 \\ 2 \end{array}\right]$

giving both