• # Paper 1, Section II, F

Define what it means for two norms on a real vector space $V$ to be Lipschitz equivalent. Show that if two norms on $V$ are Lipschitz equivalent and $F \subset V$, then $F$ is closed in one norm if and only if $F$ is closed in the other norm.

Show that if $V$ is finite-dimensional, then any two norms on $V$ are Lipschitz equivalent.

Show that $\|f\|_{1}=\int_{0}^{1}|f(x)| d x$ is a norm on the space $C[0,1]$ of continuous real-valued functions on $[0,1]$. Is the set $S=\{f \in C[0,1]: f(1 / 2)=0\}$ closed in the norm $\|\cdot\|_{1}$?

Determine whether or not the norm $\|\cdot\|_{1}$ is Lipschitz equivalent to the uniform norm $\|\cdot\|_{\infty}$ on $C[0,1]$.

[You may assume the Bolzano-Weierstrass theorem for sequences in $\mathbb{R}^{n}$.]
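As a hedged numerical aside on the final part (the family $f_{n}(x)=x^{n}$ and the grid sizes are our own choices, not part of the question): $\left\|f_{n}\right\|_{\infty}=1$ while $\left\|f_{n}\right\|_{1}=1 /(n+1) \rightarrow 0$, so the ratio of the two norms is unbounded.

```python
# Numeric sketch: for f_n(x) = x^n on [0,1], ||f_n||_inf = 1 while
# ||f_n||_1 = 1/(n+1) -> 0, so no constant C can give ||f||_inf <= C ||f||_1.
def sup_norm(f, m=100001):
    return max(abs(f(k / (m - 1))) for k in range(m))

def one_norm(f, m=100001):          # midpoint-rule approximation of the integral
    return sum(abs(f((k + 0.5) / m)) for k in range(m)) / m

for n in (1, 5, 25, 125):
    fn = lambda x, n=n: x ** n
    print(n, sup_norm(fn) / one_norm(fn))   # ratio is approximately n + 1
```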

• # Paper 2, Section I, F

Define what is meant by a uniformly continuous function on a set $E \subset \mathbb{R}$.

If $f$ and $g$ are uniformly continuous functions on $\mathbb{R}$, is the (pointwise) product $f g$ necessarily uniformly continuous on $\mathbb{R}$ ?

Is a uniformly continuous function on $(0,1)$ necessarily bounded?

Is $\cos (1 / x)$ uniformly continuous on $(0,1) ?$
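A hedged numeric illustration for the last part (the witness points are our own choice): $x_{n}=1 /(2 n \pi)$ and $y_{n}=1 /((2 n+1) \pi)$ get arbitrarily close together while $\cos \left(1 / x_{n}\right)-\cos \left(1 / y_{n}\right)=2$, the standard way to refute uniform continuity.

```python
import math

def witness(n):
    # x_n, y_n are adjacent extrema of cos(1/x): cos(1/x_n) = 1, cos(1/y_n) = -1
    x, y = 1 / (2 * n * math.pi), 1 / ((2 * n + 1) * math.pi)
    return abs(x - y), abs(math.cos(1 / x) - math.cos(1 / y))

for n in (1, 10, 100):
    gap, jump = witness(n)
    print(gap, jump)   # gap -> 0 while jump stays at 2
```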

• # Paper 2, Section II, 12F

Let $X, Y$ be subsets of $\mathbb{R}^{n}$ and define $X+Y=\{x+y: x \in X, y \in Y\}$. For each of the following statements give a proof or a counterexample (with justification) as appropriate.

(i) If each of $X, Y$ is bounded and closed, then $X+Y$ is bounded and closed.

(ii) If $X$ is bounded and closed and $Y$ is closed, then $X+Y$ is closed.

(iii) If $X, Y$ are both closed, then $X+Y$ is closed.

(iv) If $X$ is open and $Y$ is closed, then $X+Y$ is open.

[The Bolzano-Weierstrass theorem in $\mathbb{R}^{n}$ may be assumed without proof.]
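For (iii), one standard counterexample (our own illustration, checked by brute force over a finite window): $X=\{2,3,4, \ldots\}$ and $Y=\{-n+1 / n: n \geqslant 2\}$ are both closed, yet $X+Y$ contains every $1 / n$ but not their limit $0$.

```python
from fractions import Fraction

def in_sum(q, bound=60):
    # is q = m + (-n + 1/n) for some integers 2 <= m, n < bound?
    return any(q == m - n + Fraction(1, n)
               for m in range(2, bound) for n in range(2, bound))

assert all(in_sum(Fraction(1, n)) for n in range(2, 12))  # each 1/n is in X + Y
# 0 is not: for n >= 2, 1/n is not an integer, so m - n + 1/n is never 0
assert not in_sum(Fraction(0))
print("X + Y is not closed")
```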

• # Paper 3, Section I, F

Let $U \subset \mathbb{R}^{n}$ be an open set and let $f: U \rightarrow \mathbb{R}$ be a differentiable function on $U$ such that $\left\|\left.D f\right|_{x}\right\| \leqslant M$ for some constant $M$ and all $x \in U$, where $\left\|\left.D f\right|_{x}\right\|$ denotes the operator norm of the linear map $\left.D f\right|_{x}$. Let $[a, b]=\{t a+(1-t) b: 0 \leqslant t \leqslant 1\}$ $\left(a, b \in \mathbb{R}^{n}\right)$ be a straight-line segment contained in $U$. Prove that $|f(b)-f(a)| \leqslant M\|b-a\|$, where $\|\cdot\|$ denotes the Euclidean norm on $\mathbb{R}^{n}$.

Prove that if $U$ is an open ball and $\left.D f\right|_{x}=0$ for each $x \in U$, then $f$ is constant on $U$.

• # Paper 3, Section II, F

Let $f_{n}, n=1,2, \ldots$, be continuous functions on an open interval $(a, b)$. Prove that if the sequence $\left(f_{n}\right)$ converges to $f$ uniformly on $(a, b)$ then the function $f$ is continuous on $(a, b)$.

If instead $\left(f_{n}\right)$ is only known to converge pointwise to $f$ and $f$ is continuous, must $\left(f_{n}\right)$ be uniformly convergent? Justify your answer.

Suppose that a function $f$ has a continuous derivative on $(a, b)$ and let

$g_{n}(x)=n\left(f\left(x+\frac{1}{n}\right)-f(x)\right)$

Stating clearly any standard results that you require, show that the functions $g_{n}$ converge uniformly to $f^{\prime}$ on each interval $[\alpha, \beta] \subset(a, b)$.

• # Paper 4, Section I, F

Define a contraction mapping and state the contraction mapping theorem.

Let $C[0,1]$ be the space of continuous real-valued functions on $[0,1]$ endowed with the uniform norm. Show that the map $A: C[0,1] \rightarrow C[0,1]$ defined by

$A f(x)=\int_{0}^{x} f(t) d t$

is not a contraction mapping, but that $A \circ A$ is.
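A hedged grid-based check (trapezoidal integration on 2001 points is our own discretisation): $\|A f\|_{\infty}$ can equal $\|f\|_{\infty}$ (take $f \equiv 1$), while $(A \circ A) f(x)=\int_{0}^{x}(x-t) f(t) d t$ satisfies $\|(A \circ A) f\|_{\infty} \leqslant \frac{1}{2}\|f\|_{\infty}$.

```python
import random

def A(vals, h):
    # cumulative integral of a grid function (trapezoidal rule)
    out = [0.0]
    for i in range(1, len(vals)):
        out.append(out[-1] + h * (vals[i - 1] + vals[i]) / 2)
    return out

m = 2001
h = 1 / (m - 1)
Af = A([1.0] * m, h)               # A applied to f = 1 gives x
print(max(abs(v) for v in Af))     # sup norm 1: no contraction constant k < 1
AAf = A(Af, h)                     # A(A f) for f = 1 gives x^2 / 2
print(max(abs(v) for v in AAf))    # sup norm 1/2
for _ in range(20):                # random grid f: ||A(A f)|| <= ||f|| / 2
    f = [random.uniform(-1, 1) for _ in range(m)]
    g = A(A(f, h), h)
    assert max(abs(v) for v in g) <= 0.5 * max(abs(v) for v in f) + 1e-9
```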

• # Paper 4, Section II, F

Let $U \subset \mathbb{R}^{2}$ be an open set. Define what it means for a function $f: U \rightarrow \mathbb{R}$ to be differentiable at a point $\left(x_{0}, y_{0}\right) \in U$.

Prove that if the partial derivatives $D_{1} f$ and $D_{2} f$ exist on $U$ and are continuous at $\left(x_{0}, y_{0}\right)$, then $f$ is differentiable at $\left(x_{0}, y_{0}\right)$.

If $f$ is differentiable on $U$ must $D_{1} f, D_{2} f$ be continuous at $\left(x_{0}, y_{0}\right) ?$ Give a proof or counterexample as appropriate.

The function $h: \mathbb{R}^{2} \rightarrow \mathbb{R}$ is defined by

$h(x, y)=x y \sin (1 / x) \quad \text { for } x \neq 0, \quad h(0, y)=0$

Determine all the points $(x, y)$ at which $h$ is differentiable.


• # Paper 3, Section II, G

State the Residue Theorem precisely.

Let $D$ be a star-domain, and let $\gamma$ be a closed path in $D$. Suppose that $f$ is a holomorphic function on $D$, having no zeros on $\gamma$. Let $N$ be the number of zeros of $f$ inside $\gamma$, counted with multiplicity (i.e. weighted by the order of each zero and the winding number of $\gamma$ about it). Show that

$N=\frac{1}{2 \pi i} \int_{\gamma} \frac{f^{\prime}(z)}{f(z)} d z$

[The Residue Theorem may be used without proof.]

Now suppose that $g$ is another holomorphic function on $D$, also having no zeros on $\gamma$ and with $|g(z)|<|f(z)|$ on $\gamma$. Explain why, for any $0 \leqslant t \leqslant 1$, the expression

$I(t)=\int_{\gamma} \frac{f^{\prime}(z)+t g^{\prime}(z)}{f(z)+t g(z)} d z$

is well-defined. By considering the behaviour of the function $I(t)$ as $t$ varies, deduce Rouché's Theorem.

For each $n$, let $p_{n}$ be the polynomial $\sum_{k=0}^{n} \frac{z^{k}}{k !}$. Show that, as $n$ tends to infinity, the smallest modulus of the roots of $p_{n}$ also tends to infinity.

[You may assume any results on convergence of power series, provided that they are stated clearly.]
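A hedged numerical check of the final claim (the degrees and thresholds are our own choices; this uses NumPy's companion-matrix root finder):

```python
import math
import numpy as np

def min_root_modulus(n):
    # coefficients of p_n, highest degree first, as numpy.roots expects
    coeffs = [1 / math.factorial(k) for k in range(n, -1, -1)]
    return min(abs(r) for r in np.roots(coeffs))

mods = [min_root_modulus(n) for n in (2, 5, 10, 20)]
print(mods)   # for n = 2 the roots are -1 +/- i, modulus sqrt(2); then it grows
```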

• # Paper 4, Section I, G

Let $f$ be an entire function. State Cauchy's Integral Formula, relating the $n$th derivative of $f$ at a point $z$ with the values of $f$ on a circle around $z$.

State Liouville's Theorem, and deduce it from Cauchy's Integral Formula.

Let $f$ be an entire function, and suppose that for some $k$ we have that $|f(z)| \leqslant|z|^{k}$ for all $z$. Prove that $f$ is a polynomial.


• # Paper 1, Section I, B

Let $f(z)$ be an analytic/holomorphic function defined on an open set $D$, and let $z_{0} \in D$ be a point such that $f^{\prime}\left(z_{0}\right) \neq 0$. Show that the transformation $w=f(z)$ preserves the angle between smooth curves intersecting at $z_{0}$. Find such a transformation $w=f(z)$ that maps the second quadrant of the unit disc (i.e. $|z|<1, \pi / 2<\arg (z)<\pi)$ to the region in the first quadrant of the complex plane where $|w|>1$ (i.e. the region in the first quadrant outside the unit circle).

• # Paper 1, Section II, B

By choice of a suitable contour show that for $a>b>0$

$\int_{0}^{2 \pi} \frac{\sin ^{2} \theta d \theta}{a+b \cos \theta}=\frac{2 \pi}{b^{2}}\left[a-\sqrt{a^{2}-b^{2}}\right]$

Hence evaluate

$\int_{0}^{1} \frac{\left(1-x^{2}\right)^{1 / 2} x^{2} d x}{1+x^{2}}$

using the substitution $x=\cos (\theta / 2)$.
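Both claims can be sanity-checked numerically (a sketch; the midpoint rule and the parameter values are our own choices — the substitution turns the second integral into $\frac{1}{8}$ of the first with $a=3$, $b=1$):

```python
import math

def midpoint(f, lo, hi, m=200000):
    h = (hi - lo) / m
    return h * sum(f(lo + (k + 0.5) * h) for k in range(m))

def rhs(a, b):
    return (2 * math.pi / b ** 2) * (a - math.sqrt(a * a - b * b))

first = midpoint(lambda t: math.sin(t) ** 2 / (2 + math.cos(t)), 0, 2 * math.pi)
print(first, rhs(2, 1))          # the contour-integral formula with a = 2, b = 1

second = midpoint(lambda x: math.sqrt(1 - x * x) * x * x / (1 + x * x), 0, 1)
print(second, rhs(3, 1) / 8)     # the substitution x = cos(theta/2) at work
```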

• # Paper 2, Section II, B

By considering a rectangular contour, show that for $0<a<1$ we have

$\int_{-\infty}^{\infty} \frac{e^{a x}}{e^{x}+1} d x=\frac{\pi}{\sin \pi a}$

Hence evaluate

$\int_{0}^{\infty} \frac{d t}{t^{5 / 6}(1+t)}$
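A hedged numeric check (the exponent $a=1/6$, the truncation windows, and the taming substitution $t=u^{6}$ are our own choices): taking $a=1/6$ in the first identity and substituting $t=e^{x}$ turns it into the second integral, so both should equal $\pi / \sin (\pi / 6)=2 \pi$.

```python
import math

def midpoint(f, lo, hi, m=400000):
    h = (hi - lo) / m
    return h * sum(f(lo + (k + 0.5) * h) for k in range(m))

a = 1 / 6
first = midpoint(lambda x: math.exp(a * x) / (math.exp(x) + 1), -300, 300)

# t = e^x gives int_0^inf t^(a-1)/(1+t) dt; then t = u^6 removes the
# singularity at 0: the integral becomes 6 * int_0^inf du / (1 + u^6)
second = midpoint(lambda u: 6 / (1 + u ** 6), 0, 200)

print(first, second, 2 * math.pi)
```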


• # Paper 3, Section I, B

Find the most general cubic form

$u(x, y)=a x^{3}+b x^{2} y+c x y^{2}+d y^{3}$

which satisfies Laplace's equation, where $a, b, c$ and $d$ are all real. Hence find an analytic function $f(z)=f(x+i y)$ which has such a $u$ as its real part.

• # Paper 4, Section II, B

Find the Laplace transforms of $t^{n}$ for $n$ a positive integer and $H(t-a)$ where $a>0$ and $H(t)$ is the Heaviside step function.

Consider a semi-infinite string which is initially at rest and is fixed at one end. The string can support wave-like motions, and for $t>0$ it is allowed to fall under gravity. Therefore the deflection $y(x, t)$ from its initial location satisfies

$\frac{\partial^{2}}{\partial t^{2}} y=c^{2} \frac{\partial^{2}}{\partial x^{2}} y+g \quad \text { for } \quad x>0, t>0$

with

$y(0, t)=y(x, 0)=\frac{\partial}{\partial t} y(x, 0)=0 \quad \text { and } \quad y(x, t) \rightarrow \frac{g t^{2}}{2} \text { as } x \rightarrow \infty$

where $g$ is a constant. Use Laplace transforms to find $y(x, t)$.

[The convolution theorem for Laplace transforms may be quoted without proof.]
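The two transforms asked for can be verified by direct numerical integration (a sketch; the values $s=2$, $a=3/2$ and the truncation $T=60$ are our own choices): $\mathcal{L}\left\{t^{n}\right\}(s)=n ! / s^{n+1}$ and $\mathcal{L}\{H(t-a)\}(s)=e^{-a s} / s$.

```python
import math

def laplace(f, s, T=60.0, m=200000):
    # midpoint approximation of int_0^T e^(-s t) f(t) dt
    h = T / m
    return h * sum(math.exp(-s * (k + 0.5) * h) * f((k + 0.5) * h) for k in range(m))

s, a = 2.0, 1.5
for n in (1, 2, 3):
    print(n, laplace(lambda t, n=n: t ** n, s), math.factorial(n) / s ** (n + 1))

step = laplace(lambda t: 1.0 if t >= a else 0.0, s)
print(step, math.exp(-a * s) / s)
```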


• # Paper 1, Section II, A

The region $z<0$ is occupied by an ideal earthed conductor and a point charge $q$ with mass $m$ is held above it at $(0,0, d)$.

(i) What are the boundary conditions satisfied by the electric field $\mathbf{E}$ on the surface of the conductor?

(ii) Consider now a system without the conductor mentioned above. A point charge $q$ with mass $m$ is held at $(0,0, d)$, and one of charge $-q$ is held at $(0,0,-d)$. Show that the boundary condition on $\mathbf{E}$ at $z=0$ is identical to the answer to (i). Explain why this represents the electric field due to the charge at $(0,0, d)$ under the influence of the conducting boundary.

(iii) The original point charge in (i) is released with zero initial velocity. Find the time taken for the point charge to reach the plane (ignoring gravity).

[You may assume that the force on the point charge is equal to $m d^{2} \mathbf{x} / d t^{2}$, where $\mathbf{x}$ is the position vector of the charge, and $t$ is time.]

• # Paper 2, Section I, A

Starting from Maxwell's equations, deduce that

$\frac{d \Phi}{d t}=-\mathcal{E}$

for a moving circuit $C$, where $\Phi$ is the flux of $\mathbf{B}$ through the circuit and where the electromotive force $\mathcal{E}$ is defined to be

$\mathcal{E}=\oint_{\mathcal{C}}(\mathbf{E}+\mathbf{v} \times \mathbf{B}) \cdot \mathbf{d} \mathbf{r}$

where $\mathbf{v}=\mathbf{v}(\mathbf{r})$ denotes the velocity of a point $\mathbf{r}$ on $C$.

[Hint: Consider the closed surface consisting of the surface $S(t)$ bounded by $C(t)$, the surface $S(t+\delta t)$ bounded by $C(t+\delta t)$ and the surface $S^{\prime}$ stretching from $C(t)$ to $C(t+\delta t)$. Show that the flux of $\mathbf{B}$ through $S^{\prime}$ is $-\delta t \oint_{C} \mathbf{B} \cdot(\mathbf{v} \times \mathbf{d r})$.]

• # Paper 2, Section II, A

What is the relationship between the electric field $\mathbf{E}$ and the charge per unit area $\sigma$ on the surface of a perfect conductor?

Consider a charge distribution $\rho(\mathbf{r})$ distributed with potential $\phi(\mathbf{r})$ over a finite volume $V$ within which there is a set of perfect conductors with charges $Q_{i}$, each at a potential $\phi_{i}$ (normalised such that the potential at infinity is zero). Using Maxwell's equations and the divergence theorem, derive a relationship between the electrostatic energy $W$ and a volume integral of an explicit function of the electric field $\mathbf{E}$, where

$W=\frac{1}{2} \int_{V} \rho \phi d \tau+\frac{1}{2} \sum_{i} Q_{i} \phi_{i}$

Consider $N$ concentric perfectly conducting spherical shells. Shell $n$ has radius $r_{n}$ (where $r_{n}>r_{n-1}$ ) and charge $q$ for $n=1$, and charge $2(-1)^{(n+1)} q$ for $n>1$. Show that

$W \propto \frac{1}{r_{1}},$

and determine the constant of proportionality.

• # Paper 3, Section II, A

(i) Consider charges $-q$ at $\pm \mathbf{d}$ and $2 q$ at $(0,0,0)$. Write down the electric potential.

(ii) Take $\mathbf{d}=(0,0, d)$. A quadrupole is defined in the limit that $q \rightarrow \infty, d \rightarrow 0$ such that $q d^{2}$ tends to a constant $p$. Find the quadrupole's potential, showing that it is of the form

$\phi(\mathbf{r})=A \frac{\left(r^{2}+C z^{D}\right)}{r^{B}}$

where $r=|\mathbf{r}|$. Determine the constants $A, B, C$ and $D$.

(iii) The quadrupole is fixed at the origin. At time $t=0$ a particle of charge $-Q$ (where $Q$ has the same sign as $q$) and mass $m$ is at $(1,0,0)$ travelling with velocity $d \mathbf{r} / d t=(-\kappa, 0,0)$, where

$\kappa=\sqrt{\frac{Q p}{2 \pi \epsilon_{0} m}} .$

Neglecting gravity, find the time taken for the particle to reach the quadrupole in terms of $\kappa$, given that the force on the particle is equal to $m d^{2} \mathbf{r} / d t^{2}$.

• # Paper 4, Section I, A

A continuous wire of resistance $R$ is wound around a very long right circular cylinder of radius $a$, and length $l$ (long enough so that end effects can be ignored). There are $N \gg 1$ turns of wire per unit length, wound in a spiral of very small pitch. Initially, the magnetic field $\mathbf{B}$ is $\mathbf{0}$.

Both ends of the coil are attached to a battery of electromotance $\mathcal{E}_{0}$ at $t=0$, which induces a current $I(t)$. Use Ampère's law to derive $\mathbf{B}$ inside and outside the cylinder when the displacement current may be neglected. Write the self-inductance of the coil $L$ in terms of the quantities given above. Using Ohm's law and Faraday's law of induction, find $I(t)$ explicitly in terms of $\mathcal{E}_{0}, R, L$ and $t$.


• # Paper 1, Section I, B

Constant density viscous fluid with dynamic viscosity $\mu$ flows in a two-dimensional horizontal channel of depth $h$. There is a constant pressure gradient $G>0$ in the horizontal $x$-direction. The upper horizontal boundary at $y=h$ is driven at constant horizontal speed $U>0$, with the lower boundary being held at rest. Show that the steady fluid velocity $u$ in the $x$-direction is

$u=\frac{-G}{2 \mu} y(h-y)+\frac{U y}{h}$

Show that it is possible to have $d u / d y<0$ at some point in the flow for sufficiently large pressure gradient. Derive a relationship between $G$ and $U$ so that there is no net volume flux along the channel. For the flow with no net volume flux, sketch the velocity profile.

• # Paper 1, Section II, B

Consider the purely two-dimensional steady flow of an inviscid incompressible constant density fluid in the absence of body forces. For velocity $\mathbf{u}$, the vorticity is $\boldsymbol{\nabla} \times \mathbf{u}=\boldsymbol{\omega}=(0,0, \omega)$. Show that

$\mathbf{u} \times \boldsymbol{\omega}=\boldsymbol{\nabla}\left[\frac{p}{\rho}+\frac{1}{2}|\mathbf{u}|^{2}\right]$

where $p$ is the pressure and $\rho$ is the fluid density. Hence show that, if $\omega$ is a constant in both space and time,

$\frac{1}{2}|\mathbf{u}|^{2}+\omega \psi+\frac{p}{\rho}=C,$

where $C$ is a constant and $\psi$ is the streamfunction. Here, $\psi$ is defined by $\mathbf{u}=\boldsymbol{\nabla} \times \boldsymbol{\Psi}$, where $\boldsymbol{\Psi}=(0,0, \psi)$.

Fluid in the annular region $a<r<2 a$ has constant (in both space and time) vorticity $\omega$. The streamlines are concentric circles, with the fluid speed zero on $r=2 a$ and $V>0$ on $r=a$. Calculate the velocity field, and hence show that

$\omega=\frac{-2 V}{3 a}$

Deduce that the pressure difference between the outer and inner edges of the annular region is

$\Delta p=\left(\frac{15-16 \ln 2}{18}\right) \rho V^{2}$

[Hint: Note that in cylindrical polar coordinates $(r, \phi, z)$, the curl of a vector field $\mathbf{A}(r, \phi)=[a(r, \phi), b(r, \phi), c(r, \phi)]$ is

$\boldsymbol{\nabla} \times \mathbf{A}=\left[\frac{1}{r} \frac{\partial c}{\partial \phi},-\frac{\partial c}{\partial r}, \frac{1}{r}\left(\frac{\partial(r b)}{\partial r}-\frac{\partial a}{\partial \phi}\right)\right]$.]

• # Paper 2, Section I, B

Consider the steady two-dimensional fluid velocity field

$\mathbf{u}=\left(\begin{array}{l} u \\ v \end{array}\right)=\left(\begin{array}{ll} \epsilon & -\gamma \\ \gamma & -\epsilon \end{array}\right)\left(\begin{array}{l} x \\ y \end{array}\right)$

where $\epsilon \geqslant 0$ and $\gamma \geqslant 0$. Show that the fluid is incompressible. The streamfunction $\psi$ is defined by $\mathbf{u}=\boldsymbol{\nabla} \times \boldsymbol{\Psi}$, where $\boldsymbol{\Psi}=(0,0, \psi)$. Show that $\psi$ is given by

$\psi=\epsilon x y-\frac{\gamma}{2}\left(x^{2}+y^{2}\right)$

Hence show that the streamlines are defined by

$(\epsilon-\gamma)(x+y)^{2}-(\epsilon+\gamma)(x-y)^{2}=C$

for $C$ a constant. For each of the three cases below, sketch the streamlines and briefly describe the flow. (i) $\epsilon=1, \gamma=0$, (ii) $\epsilon=0, \gamma=1$, (iii) $\epsilon=1, \gamma=1$.
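A hedged spot-check of the algebra (random sample points and the finite-difference step are our own choices): expanding $(\epsilon-\gamma)(x+y)^{2}-(\epsilon+\gamma)(x-y)^{2}$ gives exactly $4 \psi$, and the stated $\psi$ reproduces the velocity field via $u=\partial \psi / \partial y$, $v=-\partial \psi / \partial x$.

```python
import random

def psi(eps, gam, x, y):
    return eps * x * y - gam / 2 * (x * x + y * y)

def check(eps, gam, trials=200, d=1e-5):
    for _ in range(trials):
        x, y = random.uniform(-5, 5), random.uniform(-5, 5)
        p = psi(eps, gam, x, y)
        quad = (eps - gam) * (x + y) ** 2 - (eps + gam) * (x - y) ** 2
        assert abs(quad - 4 * p) < 1e-9        # the streamline quadratic is 4*psi
        u, v = eps * x - gam * y, gam * x - eps * y   # velocity from the matrix
        # central differences are exact (up to rounding) for a quadratic psi
        dpdy = (psi(eps, gam, x, y + d) - psi(eps, gam, x, y - d)) / (2 * d)
        dpdx = (psi(eps, gam, x + d, y) - psi(eps, gam, x - d, y)) / (2 * d)
        assert abs(u - dpdy) < 1e-6 and abs(v + dpdx) < 1e-6
        # incompressibility: du/dx + dv/dy = eps - eps = 0 identically

for eps, gam in [(1, 0), (0, 1), (1, 1), (2, 3)]:
    check(eps, gam)
print("checks passed")
```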

• # Paper 3, Section II, B

A bubble of gas occupies the spherical region $r \leqslant R(t)$, and an incompressible irrotational liquid of constant density $\rho$ occupies the outer region $r \geqslant R$, such that as $r \rightarrow \infty$ the liquid is at rest with constant pressure $p_{\infty}$. Briefly explain why it is appropriate to use a velocity potential $\phi(r, t)$ to describe the liquid velocity $\mathbf{u}$.

By applying continuity of velocity across the gas-liquid interface, show that the liquid pressure (for $r \geqslant R$ ) satisfies

$\frac{p}{\rho}+\frac{1}{2}\left(\frac{R^{2} \dot{R}}{r^{2}}\right)^{2}-\frac{1}{r} \frac{d}{d t}\left(R^{2} \dot{R}\right)=\frac{p_{\infty}}{\rho}, \quad \text { where } \dot{R}=\frac{d R}{d t} .$

Show that the excess pressure $p_{s}-p_{\infty}$ at the bubble surface $r=R$ is

$p_{s}-p_{\infty}=\frac{\rho}{2}\left(3 \dot{R}^{2}+2 R \ddot{R}\right), \quad \text { where } \ddot{R}=\frac{d^{2} R}{d t^{2}}$

and hence that

$p_{s}-p_{\infty}=\frac{\rho}{2 R^{2}} \frac{d}{d R}\left(R^{3} \dot{R}^{2}\right)$

The pressure $p_{g}(t)$ inside the gas bubble satisfies the equation of state

$p_{g} V^{4 / 3}=C$

where $C$ is a constant, and $V(t)$ is the bubble volume. At time $t=0$ the bubble is at rest with radius $R=a$. If the bubble then expands and comes to rest at $R=2 a$, determine the required gas pressure $p_{0}$ at $t=0$ in terms of $p_{\infty}$.

[You may assume that there is contact between liquid and gas for all time, that all motion is spherically symmetric about the origin $r=0$, and that there is no body force. You may also assume Bernoulli's integral of the equation of motion to determine the liquid pressure

$\frac{p}{\rho}+\frac{\partial \phi}{\partial t}+\frac{1}{2}|\nabla \phi|^{2}=A(t)$

where $\phi(r, t)$ is the velocity potential.]

• # Paper 4, Section II, B

Consider a layer of fluid of constant density $\rho$ and equilibrium depth $h_{0}$ in a rotating frame of reference, rotating at constant angular velocity $\Omega$ about the vertical $z$-axis. The equations of motion are

\begin{aligned} \frac{\partial u}{\partial t}-f v &=-\frac{1}{\rho} \frac{\partial p}{\partial x} \\ \frac{\partial v}{\partial t}+f u &=-\frac{1}{\rho} \frac{\partial p}{\partial y} \\ 0 &=-\frac{\partial p}{\partial z}-\rho g \end{aligned}

where $p$ is the fluid pressure, $u$ and $v$ are the fluid velocities in the $x$-direction and $y$ direction respectively, $f=2 \Omega$, and $g$ is the constant acceleration due to gravity. You may also assume that the horizontal extent of the layer is sufficiently large so that the layer may be considered to be shallow, such that vertical velocities may be neglected.

By considering mass conservation, show that the depth $h(x, y, t)$ of the layer satisfies

$\frac{\partial h}{\partial t}+\frac{\partial}{\partial x}(h u)+\frac{\partial}{\partial y}(h v)=0 .$

Now assume that $h=h_{0}+\eta(x, y, t)$, where $|\eta| \ll h_{0}$. Show that the (linearised) potential vorticity $\mathbf{Q}=Q \hat{\mathbf{z}}$, defined by

$Q=\zeta-\eta \frac{f}{h_{0}}, \text { where } \zeta=\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}$

and $\hat{\mathbf{z}}$ is the unit vector in the vertical $z$-direction, is a constant in time, i.e. $Q=Q_{0}(x, y)$.

When $Q_{0}=0$ everywhere, establish that the surface perturbation $\eta$ satisfies

$\frac{\partial^{2} \eta}{\partial t^{2}}-g h_{0}\left(\frac{\partial^{2} \eta}{\partial x^{2}}+\frac{\partial^{2} \eta}{\partial y^{2}}\right)+f^{2} \eta=0$

and show that this equation has wave-like solutions $\eta=\eta_{0} \cos [k(x-c t)]$ when $c$ and $k$ are related through a dispersion relation to be determined. Show that, to leading order, the trajectories of fluid particles for these waves are ellipses. Assuming that $\eta_{0}>0, k>0$, $c>0$ and $f>0$, sketch the fluid velocity when $k(x-c t)=n \pi / 2$ for $n=0,1,2,3$.


• # Paper 1, Section I, F

Determine the second fundamental form of a surface in $\mathbb{R}^{3}$ defined by the parametrisation

$\sigma(u, v)=((a+b \cos u) \cos v,(a+b \cos u) \sin v, b \sin u)$

for $0<u, v<2 \pi$, with some fixed $a>b>0$. Show that the Gaussian curvature $K(u, v)$ of this surface takes both positive and negative values.

• # Paper 2, Section II, F

Let $H=\{x+i y: x, y \in \mathbb{R}, y>0\} \subset \mathbb{C}$ be the upper half-plane with a hyperbolic metric $g=\frac{d x^{2}+d y^{2}}{y^{2}}$. Prove that every hyperbolic circle $C$ in $H$ is also a Euclidean circle. Is the centre of $C$ as a hyperbolic circle always the same point as the centre of $C$ as a Euclidean circle? Give a proof or counterexample as appropriate.

Let $A B C$ and $A^{\prime} B^{\prime} C^{\prime}$ be two hyperbolic triangles and denote the hyperbolic lengths of their sides by $a, b, c$ and $a^{\prime}, b^{\prime}, c^{\prime}$, respectively. Show that if $a=a^{\prime}, b=b^{\prime}$ and $c=c^{\prime}$, then there is a hyperbolic isometry taking $A B C$ to $A^{\prime} B^{\prime} C^{\prime}$. Is there always such an isometry if instead the triangles have one angle the same and $a=a^{\prime}, b=b^{\prime} ?$ Justify your answer.

[Standard results on hyperbolic isometries may be assumed, provided they are clearly stated.]

• # Paper 3, Section I, F

Let $f(x)=A x+b$ be an isometry $\mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$, where $A$ is an $n \times n$ matrix and $b \in \mathbb{R}^{n}$. What are the possible values of $\operatorname{det} A$ ?

Let $I$ denote the $n \times n$ identity matrix. Show that if $n=2$ and $\operatorname{det} A>0$, but $A \neq I$, then $f$ has a fixed point. Must $f$ have a fixed point if $n=3$ and $\operatorname{det} A>0$, but $A \neq I ?$ Justify your answer.

• # Paper 3, Section II, F

Let $\mathcal{T}$ be a decomposition of the two-dimensional sphere into polygonal domains, with every polygon having at least three edges. Let $V, E$, and $F$ denote the numbers of vertices, edges and faces of $\mathcal{T}$, respectively. State Euler's formula. Prove that $2 E \geqslant 3 F$.

Suppose that at least three edges meet at every vertex of $\mathcal{T}$. Let $F_{n}$ be the number of faces of $\mathcal{T}$ that have exactly $n$ edges $(n \geqslant 3)$ and let $V_{m}$ be the number of vertices at which exactly $m$ edges meet $(m \geqslant 3)$. Is it possible for $\mathcal{T}$ to have $V_{3}=F_{3}=0$ ? Justify your answer.

By expressing $6 F-\sum_{n} n F_{n}$ in terms of the $V_{j}$, or otherwise, show that $\mathcal{T}$ has at least four faces that are triangles, quadrilaterals and/or pentagons.

• # Paper 4, Section II, F

Define an embedded parametrised surface in $\mathbb{R}^{3}$. What is the Riemannian metric induced by a parametrisation? State, in terms of the Riemannian metric, the equations defining a geodesic curve $\gamma:(0,1) \rightarrow S$, assuming that $\gamma$ is parametrised by arc-length.

Let $S$ be a conical surface

$S=\left\{(x, y, z) \in \mathbb{R}^{3}: 3\left(x^{2}+y^{2}\right)=z^{2}, \quad z>0\right\}$

Using an appropriate smooth parametrisation, or otherwise, prove that $S$ is locally isometric to the Euclidean plane. Show that any two points on $S$ can be joined by a geodesic. Is this geodesic always unique (up to a reparametrisation)? Justify your answer.

[The expression for the Euclidean metric in polar coordinates on $\mathbb{R}^{2}$ may be used without proof.]


• # Paper 1, Section II, E

Let $G$ be a finite group and $p$ a prime divisor of the order of $G$. Give the definition of a Sylow $p$-subgroup of $G$, and state Sylow's theorems.

Let $p$ and $q$ be distinct primes. Prove that a group of order $p^{2} q$ is not simple.

Let $G$ be a finite group, $H$ a normal subgroup of $G$ and $P$ a Sylow $p$-subgroup of H. Let $N_{G}(P)$ denote the normaliser of $P$ in $G$. Prove that if $g \in G$ then there exist $k \in N_{G}(P)$ and $h \in H$ such that $g=k h$.

• # Paper 2, Section I, 2E

List the conjugacy classes of $A_{6}$ and determine their sizes. Hence prove that $A_{6}$ is simple.
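A hedged brute-force verification (the enumeration below is our own; conjugacy is computed inside $A_{6}$ itself): the class sizes come out as $1,45,40,40,90,72,72$, with the $5$-cycles splitting into two classes of $72$ — the arithmetic that the simplicity argument rests on.

```python
from itertools import permutations

def parity(p):
    return sum(1 for i in range(6) for j in range(i + 1, 6) if p[i] > p[j]) % 2

def compose(p, q):          # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(6))

def inverse(p):
    r = [0] * 6
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

A6 = [p for p in permutations(range(6)) if parity(p) == 0]
assert len(A6) == 360

seen, sizes = set(), []
for x in A6:
    if x in seen:
        continue
    cls = {compose(compose(g, x), inverse(g)) for g in A6}
    seen |= cls
    sizes.append(len(cls))

print(sorted(sizes))        # the two classes of 5-cycles account for 72 + 72
```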

• # Paper 2, Section II, 11E

Prove that every finite integral domain is a field.

Let $F$ be a field and $f$ an irreducible polynomial in the polynomial ring $F[X]$. Prove that $F[X] /(f)$ is a field, where $(f)$ denotes the ideal generated by $f$.

Hence construct a field of 4 elements, and write down its multiplication table.

Construct a field of order 9.
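A hedged sketch of the four-element construction as $\mathbb{F}_{2}[X] /\left(X^{2}+X+1\right)$ (the element names $0,1, w, w{+}1$ are our own labels):

```python
# elements a + b*w of F2[X]/(X^2 + X + 1), stored as pairs (a, b), with w^2 = w + 1
ELTS = [(0, 0), (1, 0), (0, 1), (1, 1)]
NAME = {(0, 0): "0", (1, 0): "1", (0, 1): "w", (1, 1): "w+1"}

def add(p, q):
    return ((p[0] + q[0]) % 2, (p[1] + q[1]) % 2)

def mul(p, q):
    a, b = p
    c, d = q
    # (a + b w)(c + d w) = ac + (ad + bc) w + bd w^2  and  w^2 = w + 1
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

for p in ELTS[1:]:          # multiplication table of the nonzero elements
    print([NAME[mul(p, q)] for q in ELTS[1:]])
```

The same recipe with $\mathbb{F}_{3}[X] /\left(X^{2}+1\right)$, where $X^{2}+1$ is irreducible over $\mathbb{F}_{3}$, gives a field of order 9.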

• # Paper 3, Section I, E

State and prove Hilbert's Basis Theorem.

• # Paper 3, Section II, E

Let $R$ be a ring, $M$ an $R$-module and $S=\left\{m_{1}, \ldots, m_{k}\right\}$ a subset of $M$. Define what it means to say $S$ spans $M$. Define what it means to say $S$ is an independent set.

We say $S$ is a basis for $M$ if $S$ spans $M$ and $S$ is an independent set. Prove that the following two statements are equivalent.

1. $S$ is a basis for $M$.

2. Every element of $M$ is uniquely expressible in the form $r_{1} m_{1}+\cdots+r_{k} m_{k}$ for some $r_{1}, \ldots, r_{k} \in R$.

We say $S$ generates $M$ freely if $S$ spans $M$ and any map $\Phi: S \rightarrow N$, where $N$ is an $R$-module, can be extended to an $R$-module homomorphism $\Theta: M \rightarrow N$. Prove that $S$ generates $M$ freely if and only if $S$ is a basis for $M$.

Let $M$ be an $R$-module. Are the following statements true or false? Give reasons.

(i) If $S$ spans $M$ then $S$ necessarily contains an independent spanning set for $M$.

(ii) If $S$ is an independent subset of $M$ then $S$ can always be extended to a basis for $M$.

• # Paper 4, Section I, E

Let $G$ be the abelian group generated by elements $a, b$ and $c$ subject to the relations: $3 a+6 b+3 c=0,9 b+9 c=0$ and $-3 a+3 b+6 c=0$. Express $G$ as a product of cyclic groups. Hence determine the number of elements of $G$ of order 3.
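A hedged arithmetic check (encoding the three relations as the rows of an integer matrix is our own choice), using the fact that the Smith normal form diagonal is $d_{1}, d_{2} / d_{1}, d_{3} / d_{2}$, where $d_{k}$ is the gcd of the $k \times k$ minors:

```python
from math import gcd
from functools import reduce
from itertools import combinations

# relation matrix: rows encode 3a+6b+3c = 0, 9b+9c = 0, -3a+3b+6c = 0
M = [[3, 6, 3], [0, 9, 9], [-3, 3, 6]]

d1 = reduce(gcd, [abs(e) for row in M for e in row])
d2 = reduce(gcd, [abs(M[r1][c1] * M[r2][c2] - M[r1][c2] * M[r2][c1])
                  for r1, r2 in combinations(range(3), 2)
                  for c1, c2 in combinations(range(3), 2)])
d3 = abs(M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
       - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
       + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# d3 = 0 means the matrix has rank 2, so one free Z factor survives
print(d1, d2 // d1, d3)      # invariant factors 3, 9, 0 => G = Z/3 x Z/9 x Z
# elements of order exactly 3 live in the torsion part Z/3 x Z/9
order3 = sum(1 for x in range(3) for y in range(9)
             if (x, y) != (0, 0) and (3 * y) % 9 == 0)
print(order3)
```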

• # Paper 4, Section II, E

(a) Consider the four following types of rings: Principal Ideal Domains, Integral Domains, Fields, and Unique Factorisation Domains. Arrange them in the form $A \Longrightarrow$ $B \Longrightarrow C \Longrightarrow D$ (where $A \Longrightarrow B$ means if a ring is of type $A$ then it is of type $B$ )

Prove that these implications hold. [You may assume that irreducibles in a Principal Ideal Domain are prime.] Provide examples, with brief justification, to show that these implications cannot be reversed.

(b) Let $R$ be a ring with ideals $I$ and $J$ satisfying $I \subseteq J$. Define $K$ to be the set $\{r \in R: r J \subseteq I\}$. Prove that $K$ is an ideal of $R$. If $J$ and $K$ are principal, prove that $I$ is principal.


• # Paper 1, Section I, G

State and prove the Steinitz Exchange Lemma. Use it to prove that, in a finite-dimensional vector space: any two bases have the same size, and every linearly independent set extends to a basis.

Let $e_{1}, \ldots, e_{n}$ be the standard basis for $\mathbb{R}^{n}$. Is $e_{1}+e_{2}, e_{2}+e_{3}, e_{3}+e_{1}$ a basis for $\mathbb{R}^{3} ?$ Is $e_{1}+e_{2}, e_{2}+e_{3}, e_{3}+e_{4}, e_{4}+e_{1}$ a basis for $\mathbb{R}^{4} ?$ Justify your answers.
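A hedged exact-arithmetic check (the elimination routine is our own): the first three vectors have determinant $2 \neq 0$, while the four vectors in $\mathbb{R}^{4}$ satisfy $\left(e_{1}+e_{2}\right)-\left(e_{2}+e_{3}\right)+\left(e_{3}+e_{4}\right)-\left(e_{4}+e_{1}\right)=0$ and so have determinant $0$.

```python
from fractions import Fraction

def det(m):                      # exact Gaussian elimination over the rationals
    m = [[Fraction(x) for x in row] for row in m]
    n, d = len(m), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if m[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

B3 = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]   # rows are the candidate vectors
B4 = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1]]
print(det(B3), det(B4))   # 2 (a basis) and 0 (not a basis)
```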

• # Paper 1, Section II, G

Let $V$ be an $n$-dimensional real vector space, and let $T$ be an endomorphism of $V$. We say that $T$ acts on a subspace $W$ if $T(W) \subset W$.

(i) For any $x \in V$, show that $T$ acts on the linear span of $\left\{x, T(x), T^{2}(x), \ldots, T^{n-1}(x)\right\}$.

(ii) If $\left\{x, T(x), T^{2}(x), \ldots, T^{n-1}(x)\right\}$ spans $V$, show directly (i.e. without using the Cayley-Hamilton Theorem) that $T$ satisfies its own characteristic equation.

(iii) Suppose that $T$ acts on a subspace $W$ with $W \neq\{0\}$ and $W \neq V$. Let $e_{1}, \ldots, e_{k}$ be a basis for $W$, and extend to a basis $e_{1}, \ldots, e_{n}$ for $V$. Describe the matrix of $T$ with respect to this basis.

(iv) Using (i), (ii) and (iii) and induction, give a proof of the Cayley-Hamilton Theorem.

[Simple properties of determinants may be assumed without proof.]

• # Paper 2, Section I, G

State and prove the Rank-Nullity Theorem.

Let $\alpha$ be a linear map from $\mathbb{R}^{5}$ to $\mathbb{R}^{3}$. What are the possible dimensions of the kernel of $\alpha$ ? Justify your answer.

• # Paper 2, Section II, G

Define the determinant of an $n \times n$ complex matrix $A$. Explain, with justification, how the determinant of $A$ changes when we perform row and column operations on $A$.

Let $A, B, C$ be complex $n \times n$ matrices. Prove the following statements.

(i) $\operatorname{det}\left(\begin{array}{cc}A & C \\ 0 & B\end{array}\right)=\operatorname{det} A \operatorname{det} B$.

(ii) $\operatorname{det}\left(\begin{array}{cc}A & -B \\ B & A\end{array}\right)=\operatorname{det}(A+i B) \operatorname{det}(A-i B)$.
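Both identities are easy to spot-check numerically (random matrices and the size $n=4$ are our own choices; for real $A, B$ the right side of (ii) equals $|\operatorname{det}(A+i B)|^{2}$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
Z = np.zeros((n, n))

lhs1 = np.linalg.det(np.block([[A, C], [Z, B]]))
rhs1 = np.linalg.det(A) * np.linalg.det(B)
print(lhs1, rhs1)

lhs2 = np.linalg.det(np.block([[A, -B], [B, A]]))
rhs2 = (np.linalg.det(A + 1j * B) * np.linalg.det(A - 1j * B)).real
print(lhs2, rhs2)
```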

• # Paper 3, Section II, G

Let $q$ be a nonsingular quadratic form on a finite-dimensional real vector space $V$. Prove that we may write $V=P \bigoplus N$, where the restriction of $q$ to $P$ is positive definite, the restriction of $q$ to $N$ is negative definite, and $q(x+y)=q(x)+q(y)$ for all $x \in P$ and $y \in N$. [No result on diagonalisability may be assumed.]

Show that the dimensions of $P$ and $N$ are independent of the choice of $P$ and $N$. Give an example to show that $P$ and $N$ are not themselves uniquely defined.

Find such a decomposition $V=P \bigoplus N$ when $V=\mathbb{R}^{3}$ and $q$ is the quadratic form $q((x, y, z))=x^{2}+2 y^{2}-2 x y-2 x z$
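A hedged numeric check of the last part (the completion of squares below is our own sketch, not the official solution): the symmetric matrix of $q$ has two positive and one negative eigenvalue, consistent with $q=(x-y-z)^{2}+(y-z)^{2}-2 z^{2}$, so $\operatorname{dim} P=2$ and $\operatorname{dim} N=1$.

```python
import numpy as np

Q = np.array([[1., -1., -1.],    # symmetric matrix of q: off-diagonal entries
              [-1., 2., 0.],     # are half the coefficients of xy, xz, yz
              [-1., 0., 0.]])
evals = np.linalg.eigvalsh(Q)
print(evals, (evals > 0).sum(), (evals < 0).sum())   # 2 positive, 1 negative

rng = np.random.default_rng(1)
for x, y, z in rng.standard_normal((50, 3)):
    q = x * x + 2 * y * y - 2 * x * y - 2 * x * z
    assert abs(q - ((x - y - z) ** 2 + (y - z) ** 2 - 2 * z * z)) < 1e-9
```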

• # Paper 4, Section I, G

Let $V$ denote the vector space of all real polynomials of degree at most 2 . Show that

$(f, g)=\int_{-1}^{1} f(x) g(x) d x$

defines an inner product on $V$.

Find an orthonormal basis for $V$.
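A hedged numeric sketch of one route (Gram-Schmidt on $1, x, x^{2}$, with the exact moments $\int_{-1}^{1} x^{k} d x$ used for the inner products): the output should match the normalised Legendre polynomials $1 / \sqrt{2}$, $\sqrt{3 / 2}\, x$, $\sqrt{5 / 8}\left(3 x^{2}-1\right)$.

```python
import math

def inner(c1, c2):   # polynomials as coefficient lists [a0, a1, a2]
    # (f, g) = sum over i, j of a_i b_j * int_{-1}^{1} x^(i+j) dx
    tot = 0.0
    for i, a in enumerate(c1):
        for j, b in enumerate(c2):
            if (i + j) % 2 == 0:          # odd powers integrate to 0
                tot += a * b * 2.0 / (i + j + 1)
    return tot

basis = []
for c in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
    v = list(map(float, c))
    for b in basis:                        # subtract projections
        proj = inner(v, b)
        v = [vi - proj * bi for vi, bi in zip(v, b)]
    nrm = math.sqrt(inner(v, v))
    basis.append([vi / nrm for vi in v])

for b in basis:
    print([round(x, 6) for x in b])
```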

• # Paper 4, Section II, G

Let $V$ be a real vector space. What is the dual $V^{*}$ of $V ?$ If $e_{1}, \ldots, e_{n}$ is a basis for $V$, define the dual basis $e_{1}^{*}, \ldots, e_{n}^{*}$ for $V^{*}$, and show that it is indeed a basis for $V^{*}$.

[No result about dimensions of dual spaces may be assumed.]

For a subspace $U$ of $V$, what is the annihilator of $U$ ? If $V$ is $n$-dimensional, how does the dimension of the annihilator of $U$ relate to the dimension of $U$ ?

Let $\alpha: V \rightarrow W$ be a linear map between finite-dimensional real vector spaces. What is the dual map $\alpha^{*}$ ? Explain why the rank of $\alpha^{*}$ is equal to the rank of $\alpha$. Prove that the kernel of $\alpha^{*}$ is the annihilator of the image of $\alpha$, and also that the image of $\alpha^{*}$ is the annihilator of the kernel of $\alpha$.

[Results about the matrices representing a map and its dual may be used without proof, provided they are stated clearly.]

Now let $V$ be the vector space of all real polynomials, and define elements $L_{0}, L_{1}, \ldots$ of $V^{*}$ by setting $L_{i}(p)$ to be the coefficient of $X^{i}$ in $p$ (for each $p \in V$ ). Do the $L_{i}$ form a basis for $V^{*}$ ?


• # Paper 1, Section II, 20H

Consider a homogeneous Markov chain $\left(X_{n}: n \geqslant 0\right)$ with state space $S$ and transition matrix $P=\left(p_{i, j}: i, j \in S\right)$. For a state $i$, define the terms aperiodic, positive recurrent and ergodic.

Let $S=\{0,1,2, \ldots\}$ and suppose that for $i \geqslant 1$ we have $p_{i, i-1}=1$ and

$p_{0,0}=0, p_{0, j}=p q^{j-1}, j=1,2, \ldots,$

where $p=1-q \in(0,1)$. Show that this Markov chain is irreducible.

Let $T_{0}=\inf \left\{n \geqslant 1: X_{n}=0\right\}$ be the first passage time to 0 . Find $\mathbb{P}\left(T_{0}=n \mid X_{0}=0\right)$ and show that state 0 is ergodic.

Find the invariant distribution $\pi$ for this Markov chain. Write down:

(i) the mean recurrence time for state $i, i \geqslant 1$;

(ii) $\lim _{n \rightarrow \infty} \mathbb{P}\left(X_{n} \neq 0 \mid X_{0}=0\right)$.

[Results from the course may be quoted without proof, provided they are clearly stated.]

• # Paper 2, Section II, H

Let $\left(X_{n}: n \geqslant 0\right)$ be a homogeneous Markov chain with state space $S$ and transition matrix $P=\left(p_{i, j}: i, j \in S\right)$. For $A \subseteq S$, let

$H^{A}=\inf \left\{n \geqslant 0: X_{n} \in A\right\} \text { and } h_{i}^{A}=\mathbb{P}\left(H^{A}<\infty \mid X_{0}=i\right), i \in S$

Prove that $h^{A}=\left(h_{i}^{A}: i \in S\right)$ is the minimal non-negative solution to the equations

$h_{i}^{A}= \begin{cases}1 & \text { for } i \in A \\ \sum_{j \in S} p_{i, j} h_{j}^{A} & \text { otherwise. }\end{cases}$

Three people $A, B$ and $C$ play a series of two-player games. In the first game, two people play and the third person sits out. Any subsequent game is played between the winner of the previous game and the person sitting out the previous game. The overall winner of the series is the first person to win two consecutive games. The players are evenly matched so that in any game each of the two players has probability $\frac{1}{2}$ of winning the game, independently of all other games. For $n=1,2, \ldots$, let $X_{n}$ be the ordered pair consisting of the winners of games $n$ and $n+1$. Thus the state space is $\{A A, A B, A C, B A, B B, B C, C A, C B, C C\}$, and, for example, $X_{1}=A C$ if $A$ wins the first game and $C$ wins the second.

The first game is between $A$ and $B$. Treating $A A, B B$ and $C C$ as absorbing states, or otherwise, find the probability of winning the series for each of the three players.
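The hitting-probability system can be checked mechanically. The sketch below (not part of the question) runs value iteration from $h^{(0)}=0$, which by the minimality statement above converges to $h^{A}$; a transient state $(a, b)$ with $a \neq b$ moves to $(b, b)$ or $(b, t)$ with probability $\frac{1}{2}$ each, where $t$ is the third player.

```python
players = "ABC"
states = [(a, b) for a in players for b in players]

def third(a, b):
    # the player not involved in a game between a and b
    return next(c for c in players if c not in (a, b))

def step(state):
    # From (w_n, w_{n+1}) with distinct entries, game n+2 is between
    # w_{n+1} and the player who sat out game n+1.
    a, b = state
    return [((b, b), 0.5), ((b, third(a, b)), 0.5)]

def win_prob(champ):
    # Value iteration from h = 0 converges to the minimal non-negative
    # solution of the hitting equations for the absorbing state (champ, champ).
    h = {s: 0.0 for s in states}
    for _ in range(200):
        h = {s: (1.0 if s == (champ, champ) else 0.0) if s[0] == s[1]
                else sum(q * h[s2] for s2, q in step(s))
             for s in states}
    # Game 1 is A vs B; X_1 is the pair of winners of games 1 and 2, and
    # each of the four possibilities below has probability 1/4.
    return 0.25 * sum(h[s] for s in [("A", "A"), ("A", "C"),
                                     ("B", "B"), ("B", "C")])

probs = {c: win_prob(c) for c in players}
print(probs)
```

The iteration converges to $5 / 14$ for each of $A$ and $B$ and $2 / 7$ for $C$, agreeing with solving the small linear system by hand.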

comment
• # Paper 3, Section I, H

Let $\left(X_{n}: n \geqslant 0\right)$ be a homogeneous Markov chain with state space $S$. For $i, j$ in $S$ let $p_{i, j}(n)$ denote the $n$-step transition probability $\mathbb{P}\left(X_{n}=j \mid X_{0}=i\right)$.

(i) Express the $(m+n)$-step transition probability $p_{i, j}(m+n)$ in terms of the $n$-step and $m$-step transition probabilities.

(ii) Write $i \rightarrow j$ if there exists $n \geqslant 0$ such that $p_{i, j}(n)>0$, and $i \leftrightarrow j$ if $i \rightarrow j$ and $j \rightarrow i$. Prove that if $i \leftrightarrow j$ and $i \neq j$ then either both $i$ and $j$ are recurrent or both $i$ and $j$ are transient. [You may assume that a state $i$ is recurrent if and only if $\sum_{n=0}^{\infty} p_{i, i}(n)=\infty$, and otherwise $i$ is transient.]

(iii) A Markov chain has state space $\{0,1,2,3\}$ and transition matrix

$\left(\begin{array}{cccc} \frac{1}{2} & \frac{1}{3} & 0 & \frac{1}{6} \\ 0 & \frac{3}{4} & 0 & \frac{1}{4} \\ \frac{1}{2} & \frac{1}{2} & 0 & 0 \\ \frac{1}{2} & 0 & 0 & \frac{1}{2} \end{array}\right)$

For each state $i$, determine whether $i$ is recurrent or transient. [Results from the course may be quoted without proof, provided they are clearly stated.]
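For a finite chain the classification reduces to locating the closed communicating classes: every state in a finite closed class is recurrent, and every other state is transient. A sketch of that bookkeeping for the matrix above (only the support of each row matters):

```python
# States 0..3; support[i] lists the states j with p_{i,j} > 0.
support = {
    0: {0, 1, 3},
    1: {1, 3},
    2: {0, 1},
    3: {0, 3},
}

def reachable(i):
    # states reachable from i in zero or more steps (depth-first search)
    seen, stack = {i}, [i]
    while stack:
        for j in support[stack.pop()]:
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

reach = {i: reachable(i) for i in support}
# On a finite state space, i is recurrent iff its class is closed:
# every state reachable from i can reach i back.
recurrent = {i for i in support if all(i in reach[j] for j in reach[i])}
transient = set(support) - recurrent
print(recurrent, transient)
```

Here $\{0,1,3\}$ is a closed communicating class while state 2 leaks into it and is never re-entered.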

comment
• # Paper 4, Section I, H

Let $\left(X_{n}: n \geqslant 0\right)$ be a homogeneous Markov chain with state space $S$ and transition matrix $P=\left(p_{i, j}: i, j \in S\right)$.

(a) Let $W_{n}=X_{2 n}, n=0,1,2, \ldots$ Show that $\left(W_{n}: n \geqslant 0\right)$ is a Markov chain and give its transition matrix. If $\lambda_{i}=\mathbb{P}\left(X_{0}=i\right), i \in S$, find $\mathbb{P}\left(W_{1}=0\right)$ in terms of the $\lambda_{i}$ and the $p_{i, j}$.

[Results from the course may be quoted without proof, provided they are clearly stated.]

(b) Suppose that $S=\{-1,0,1\}, p_{0,1}=p_{-1,-1}=0$ and $p_{-1,0} \neq p_{1,0}$. Let $Y_{n}=\left|X_{n}\right|$, $n=0,1,2, \ldots$ In terms of the $p_{i, j}$, find

(i) $\mathbb{P}\left(Y_{n+1}=0 \mid Y_{n}=1, Y_{n-1}=0\right)$ and

(ii) $\mathbb{P}\left(Y_{n+1}=0 \mid Y_{n}=1, Y_{n-1}=1, Y_{n-2}=0\right)$.

What can you conclude about whether or not $\left(Y_{n}: n \geqslant 0\right)$ is a Markov chain?
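One way to see what is going on is to pick concrete numbers respecting the constraints ($p_{0,1}=p_{-1,-1}=0$, $p_{-1,0} \neq p_{1,0}$) and compute both conditional probabilities by brute-force path enumeration. A sketch (the specific entries $0.3$, $0.8$, etc. are our choices, not part of the question):

```python
from itertools import product

S = [-1, 0, 1]
# A concrete matrix with p_{0,1} = p_{-1,-1} = 0 and p_{-1,0} != p_{1,0}.
P = {
    -1: {-1: 0.0, 0: 0.3, 1: 0.7},
     0: {-1: 0.6, 0: 0.4, 1: 0.0},
     1: {-1: 0.1, 0: 0.8, 1: 0.1},
}

def cond_prob(event, given):
    # P(event | given) over all length-4 paths, X_0 uniform on S
    num = den = 0.0
    for path in product(S, repeat=4):
        w = (1 / 3) * P[path[0]][path[1]] * P[path[1]][path[2]] \
                    * P[path[2]][path[3]]
        y = [abs(x) for x in path]
        if given(y):
            den += w
            if event(y):
                num += w
    return num / den

# (i)  P(Y_2 = 0 | Y_1 = 1, Y_0 = 0): p_{0,1} = 0 forces X_1 = -1,
#      so this equals p_{-1,0}.
p_i = cond_prob(lambda y: y[2] == 0, lambda y: y[0] == 0 and y[1] == 1)
# (ii) P(Y_3 = 0 | Y_2 = 1, Y_1 = 1, Y_0 = 0): p_{-1,-1} = 0 forces
#      X_2 = +1, so this equals p_{1,0}.
p_ii = cond_prob(lambda y: y[3] == 0,
                 lambda y: y[0] == 0 and y[1] == 1 and y[2] == 1)
print(p_i, p_ii)
```

Since the two answers differ whenever $p_{-1,0} \neq p_{1,0}$, the conditional law of $Y_{n+1}$ given $Y_{n}=1$ depends on the earlier history.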

comment

• # Paper 1, Section II, D

(a) Legendre's differential equation may be written

$\left(1-x^{2}\right) \frac{d^{2} y}{d x^{2}}-2 x \frac{d y}{d x}+n(n+1) y=0, \quad y(1)=1$

Show that for non-negative integer $n$, this equation has a solution $P_{n}(x)$ that is a polynomial of degree $n$. Find $P_{0}, P_{1}$ and $P_{2}$ explicitly.
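The first three Legendre polynomials are standard, and it is easy to verify mechanically that each satisfies the equation together with the normalisation $P_{n}(1)=1$. A quick sketch with hand-coded derivatives (not part of the question):

```python
# P_0, P_1, P_2 together with first and second derivatives.
polys = {
    0: (lambda x: 1.0,              lambda x: 0.0,   lambda x: 0.0),
    1: (lambda x: x,                lambda x: 1.0,   lambda x: 0.0),
    2: (lambda x: 1.5 * x**2 - 0.5, lambda x: 3 * x, lambda x: 3.0),
}

def residual(n, x):
    # Legendre operator: (1 - x^2) y'' - 2 x y' + n (n + 1) y
    y, dy, d2y = (f(x) for f in polys[n])
    return (1 - x**2) * d2y - 2 * x * dy + n * (n + 1) * y

max_res = max(abs(residual(n, x / 10))
              for n in polys for x in range(-10, 11))
norms_ok = all(abs(polys[n][0](1.0) - 1.0) < 1e-12 for n in polys)
print(max_res, norms_ok)
```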

(b) Laplace's equation in spherical coordinates for an axisymmetric function $U(r, \theta)$ (i.e. no $\phi$ dependence) is given by

$\frac{1}{r^{2}} \frac{\partial}{\partial r}\left(r^{2} \frac{\partial U}{\partial r}\right)+\frac{1}{r^{2} \sin \theta} \frac{\partial}{\partial \theta}\left(\sin \theta \frac{\partial U}{\partial \theta}\right)=0$

Use separation of variables to find the general solution for $U(r, \theta)$.

Find the solution $U(r, \theta)$ that satisfies the boundary conditions

\begin{aligned} &U(r, \theta) \rightarrow v_{0} r \cos \theta \quad \text { as } r \rightarrow \infty \\ &\frac{\partial U}{\partial r}=0 \quad \text { at } r=r_{0} \end{aligned}

where $v_{0}$ and $r_{0}$ are constants.
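For orientation (a hedged sketch of the standard route, not a model answer): writing $U=R(r) \Theta(\theta)$ and requiring regularity in $\theta$ gives Legendre polynomials in $\cos \theta$, so the general axisymmetric solution is

```latex
U(r, \theta)=\sum_{\ell=0}^{\infty}\left(A_{\ell}\, r^{\ell}+B_{\ell}\, r^{-(\ell+1)}\right) P_{\ell}(\cos \theta).
```

Matching $U \rightarrow v_{0} r \cos \theta$ as $r \rightarrow \infty$ leaves only the $\ell=1$ terms with $A_{1}=v_{0}$, and $\partial U / \partial r=0$ at $r=r_{0}$ then fixes $B_{1}=v_{0} r_{0}^{3} / 2$, giving $U=v_{0}\left(r+r_{0}^{3} / 2 r^{2}\right) \cos \theta$.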

comment
• # Paper 2, Section I, D

(i) Calculate the Fourier series for the periodic extension on $\mathbb{R}$ of the function

$f(x)=x(1-x)$

defined on the interval $[0,1)$.

(ii) Explain why the Fourier series for the periodic extension of $f^{\prime}(x)$ can be obtained by term-by-term differentiation of the series for $f(x)$.

(iii) Let $G(x)$ be the Fourier series for the periodic extension of $f^{\prime}(x)$. Determine the value of $G(0)$ and explain briefly how it is related to the values of $f^{\prime}$.
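Parts (i) and (iii) are easy to sanity-check numerically. The sketch below computes the cosine coefficients of the period-1 extension of $f(x)=x(1-x)$ by Simpson quadrature and compares them with the closed form $-1 /\left(\pi^{2} n^{2}\right)$ obtained by integrating by parts (our computation, not supplied by the question):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3

f = lambda x: x * (1 - x)

# a_0 = int_0^1 f = 1/6; since f is symmetric about x = 1/2, only
# cos(2 pi n x) terms appear in the period-1 Fourier series.
a0 = simpson(f, 0.0, 1.0)
coeffs = [2 * simpson(lambda x: f(x) * math.cos(2 * math.pi * n * x), 0.0, 1.0)
          for n in (1, 2, 3)]
closed = [-1 / (math.pi * n) ** 2 for n in (1, 2, 3)]
print(a0, coeffs, closed)
```

Differentiating term by term then gives a pure sine series for $f^{\prime}$, so $G(0)=0$, the midpoint of the jump between $f^{\prime}\left(0^{+}\right)=1$ and $f^{\prime}\left(1^{-}\right)=-1$.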

comment
• # Paper 2, Section II, 16D

The Fourier transform $\tilde{f}$ of a function $f$ is defined as

$\tilde{f}(k)=\int_{-\infty}^{\infty} f(x) e^{-i k x} d x, \quad \text { so that } f(x)=\frac{1}{2 \pi} \int_{-\infty}^{\infty} \tilde{f}(k) e^{i k x} d k$

A Green's function $G\left(t, t^{\prime}, x, x^{\prime}\right)$ for the diffusion equation in one spatial dimension satisfies

$\frac{\partial G}{\partial t}-D \frac{\partial^{2} G}{\partial x^{2}}=\delta\left(t-t^{\prime}\right) \delta\left(x-x^{\prime}\right)$

(a) By applying a Fourier transform, show that the Fourier transform $\tilde{G}$ of this Green's function and the Green's function $G$ are

\begin{aligned} \tilde{G}\left(t, t^{\prime}, k, x^{\prime}\right) &=H\left(t-t^{\prime}\right) e^{-i k x^{\prime}} e^{-D k^{2}\left(t-t^{\prime}\right)} \\ G\left(t, t^{\prime}, x, x^{\prime}\right) &=\frac{H\left(t-t^{\prime}\right)}{\sqrt{4 \pi D\left(t-t^{\prime}\right)}} e^{-\frac{\left(x-x^{\prime}\right)^{2}}{4 D\left(t-t^{\prime}\right)}} \end{aligned}

where $H$ is the Heaviside function. [Hint: The Fourier transform $\tilde{F}$ of a Gaussian $F(x)=\frac{1}{\sqrt{4 \pi a}} e^{-\frac{x^{2}}{4 a}}$, with $a$ constant, is given by $\tilde{F}(k)=e^{-a k^{2}}$.]
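Two quick consistency checks on $G$ (a numerical sketch, not part of the question): for fixed $t-t^{\prime}>0$ the kernel should carry unit mass in $x$ (the integral of the point source), and its spread should grow as $2 D\left(t-t^{\prime}\right)$:

```python
import math

D, tau, xp = 1.0, 0.5, 0.0   # D, t - t', and x'

def G(x):
    # heat kernel for t > t'
    return math.exp(-(x - xp) ** 2 / (4 * D * tau)) \
        / math.sqrt(4 * math.pi * D * tau)

# Riemann sum on a wide grid; the Gaussian tails beyond |x| = 20
# are negligible for these parameters.
n = 100_000
xs = [-20 + 40 * i / n for i in range(n + 1)]
h = xs[1] - xs[0]
mass = sum(G(x) for x in xs) * h                 # expect 1
var = sum((x - xp) ** 2 * G(x) for x in xs) * h  # expect 2 * D * tau
print(mass, var)
```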

(b) The analogous result for the Green's function for the diffusion equation in two spatial dimensions is

$G\left(t, t^{\prime}, x, x^{\prime}, y, y^{\prime}\right)=\frac{H\left(t-t^{\prime}\right)}{4 \pi D\left(t-t^{\prime}\right)} e^{-\frac{1}{4 D\left(t-t^{\prime}\right)}\left[\left(x-x^{\prime}\right)^{2}+\left(y-y^{\prime}\right)^{2}\right]}$

Use this Green's function to construct a solution for $t \geqslant 0$ to the diffusion equation

$\frac{\partial \Psi}{\partial t}-D\left(\frac{\partial^{2} \Psi}{\partial x^{2}}+\frac{\partial^{2} \Psi}{\partial y^{2}}\right)=p(t) \delta(x) \delta(y)$

with the initial condition $\Psi(0, x, y)=0$.

Now set

$p(t)= \begin{cases}p_{0}=\mathrm{const} & \text { for } \quad 0 \leqslant t \leqslant t_{0} \\ 0 & \text { for } \quad t>t_{0}\end{cases}$

Find the solution $\Psi(t, x, y)$ for $t>t_{0}$ in terms of the exponential integral defined by

$\mathrm{Ei}(-\eta)=-\int_{\eta}^{\infty} \frac{e^{-\lambda}}{\lambda} d \lambda$

Use the approximation $\mathrm{Ei}(-\eta) \approx \ln \eta+C$, valid for $\eta \ll 1$, to simplify this solution $\Psi(t, x, y)$. Here $C \approx 0.577$ is Euler's constant.
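As a consistency check on the construction (a sketch; the closed form below comes from the substitution $\lambda=r^{2} / 4 D\left(t-t^{\prime}\right)$ in the Duhamel integral and is our own derivation, not supplied by the question), for $t>t_{0}$ one expects, with $r^{2}=x^{2}+y^{2}$,
$$\Psi=\frac{p_{0}}{4 \pi D}\left[\mathrm{Ei}\left(-\frac{r^{2}}{4 D\left(t-t_{0}\right)}\right)-\mathrm{Ei}\left(-\frac{r^{2}}{4 D t}\right)\right].$$
The snippet compares this with direct quadrature of $p_{0} \int_{0}^{t_{0}} G \, d t^{\prime}$:

```python
import math

def simpson(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3

def Ei_neg(eta):
    # Ei(-eta) = -int_eta^inf e^(-lam)/lam dlam; the tail beyond
    # eta + 60 is below e^(-60) and is ignored.
    return -simpson(lambda lam: math.exp(-lam) / lam, eta, eta + 60.0, 20000)

D, p0, t0, t, r2 = 1.0, 1.0, 1.0, 2.0, 1.0

direct = p0 * simpson(
    lambda tp: math.exp(-r2 / (4 * D * (t - tp))) / (4 * math.pi * D * (t - tp)),
    0.0, t0, 4000)
closed = p0 / (4 * math.pi * D) * (Ei_neg(r2 / (4 * D * (t - t0)))
                                   - Ei_neg(r2 / (4 * D * t)))
print(direct, closed)
```

With the quoted approximation, for $r^{2} \ll 4 D\left(t-t_{0}\right)$ the constant $C$ cancels and $\Psi \approx \frac{p_{0}}{4 \pi D} \ln \frac{t}{t-t_{0}}$.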

comment
• # Paper 3, Section I, D

Using the method of characteristics, solve the differential equation

$-y \frac{\partial u}{\partial x}+x \frac{\partial u}{\partial y}=0$

where $x, y \in \mathbb{R}$ and $u=\cos y^{2}$ on $x=0, y \geqslant 0$.
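The characteristics here are the circles $x^{2}+y^{2}=$ const (from $\dot{x}=-y$, $\dot{y}=x$), and each circle meets the data curve $\{x=0, y \geqslant 0\}$ exactly once, which suggests $u=\cos \left(x^{2}+y^{2}\right)$. A one-line check with hand-coded partial derivatives (a sketch, not part of the question):

```python
import math

u  = lambda x, y: math.cos(x * x + y * y)
ux = lambda x, y: -2 * x * math.sin(x * x + y * y)   # du/dx
uy = lambda x, y: -2 * y * math.sin(x * x + y * y)   # du/dy

# -y u_x + x u_y should vanish identically, and u(0, y) = cos(y^2).
pts = [(0.3, -1.2), (2.0, 0.7), (-1.5, 1.5)]
max_res = max(abs(-y * ux(x, y) + x * uy(x, y)) for x, y in pts)
bc_ok = all(abs(u(0.0, y) - math.cos(y * y)) < 1e-12 for y in (0.0, 0.5, 2.0))
print(max_res, bc_ok)
```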

comment
• # Paper 3, Section II, 15D

Let $\mathcal{L}$ be a linear second-order differential operator on the interval $[0, \pi / 2]$. Consider the problem

$\mathcal{L} y(x)=f(x) ; \quad y(0)=y(\pi / 2)=0$

with $f(x)$ bounded in $[0, \pi / 2]$.

(i) How is a Green's function for this problem defined?

(ii) How is a solution $y(x)$ for this problem constructed from the Green's function?

(iii) Describe the continuity and jump conditions used in the construction of the Green's function.

(iv) Use the continuity and jump conditions to construct the Green's function for the differential equation

$\frac{d^{2} y}{d x^{2}}-\frac{d y}{d x}+\frac{5}{4} y=f(x)$

on the interval $[0, \pi / 2]$ with the boundary conditions $y(0)=0, y(\pi / 2)=0$ and an arbitrary bounded function $f(x)$. Use the Green's function to construct a solution $y(x)$ for the particular case $f(x)=e^{x / 2}$.
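A numerical sanity check of the construction (a sketch; the homogeneous solutions $e^{x / 2} \sin x$ and $e^{x / 2} \cos x$ follow from the roots $\frac{1}{2} \pm i$ of the auxiliary equation, and the closed form used in the final comparison is our own calculation, not supplied by the question):

```python
import math

y1 = lambda x: math.exp(x / 2) * math.sin(x)   # satisfies y(0) = 0
y2 = lambda x: math.exp(x / 2) * math.cos(x)   # satisfies y(pi/2) = 0
W  = lambda s: -math.exp(s)                    # Wronskian y1 y2' - y1' y2

def G(x, s):
    # continuous at x = s, with unit jump in dG/dx across x = s
    lo, hi = min(x, s), max(x, s)
    return y1(lo) * y2(hi) / W(s)

f = lambda s: math.exp(s / 2)

def simpson(g, a, b, n=2000):
    h = (b - a) / n
    t = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h)
                          for i in range(1, n))
    return t * h / 3

def y(x):
    # split the quadrature at the kink of G
    return simpson(lambda s: G(x, s) * f(s), 0.0, x) + \
           simpson(lambda s: G(x, s) * f(s), x, math.pi / 2)

# Candidate closed form: e^{x/2} is a particular solution, and the boundary
# conditions then force y = e^{x/2} (1 - cos x - sin x).
exact = lambda x: math.exp(x / 2) * (1 - math.cos(x) - math.sin(x))
err = max(abs(y(x) - exact(x)) for x in (0.3, 0.8, 1.3))
print(err)
```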

comment
• # Paper 4, Section I, D

Consider the ordinary differential equation

$\frac{d^{2} \psi}{d z^{2}}-\left[\frac{15 k^{2}}{4(k|z|+1)^{2}}-3 k \delta(z)\right] \psi=0$