# Part IB, 2016, Paper 4


Paper 4, Section I, G

(a) What does it mean to say that a mapping $f: X \rightarrow X$ from a metric space to itself is a contraction?

(b) State carefully the contraction mapping theorem.

(c) Let $\left(a_{1}, a_{2}, a_{3}\right) \in \mathbb{R}^{3}$. By considering the metric space $\left(\mathbb{R}^{3}, d\right)$ with

$d(x, y)=\sum_{i=1}^{3}\left|x_{i}-y_{i}\right|$

or otherwise, show that there exists a unique solution $\left(x_{1}, x_{2}, x_{3}\right) \in \mathbb{R}^{3}$ of the system of equations

$\begin{aligned} &x_{1}=a_{1}+\frac{1}{6}\left(\sin x_{2}+\sin x_{3}\right), \\ &x_{2}=a_{2}+\frac{1}{6}\left(\sin x_{1}+\sin x_{3}\right), \\ &x_{3}=a_{3}+\frac{1}{6}\left(\sin x_{1}+\sin x_{2}\right) . \end{aligned}$
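The contraction structure can be checked numerically: since $|\sin u-\sin v| \leqslant|u-v|$, the map defined by the right-hand sides has Lipschitz constant at most $1/3$ in the given $\ell^{1}$ metric, so fixed-point iteration converges to the unique solution. A small Python sketch (the choice of $(a_{1}, a_{2}, a_{3})$ is arbitrary):

```python
import math

def T(x, a):
    """One application of the map whose fixed point solves the system."""
    x1, x2, x3 = x
    return (a[0] + (math.sin(x2) + math.sin(x3)) / 6,
            a[1] + (math.sin(x1) + math.sin(x3)) / 6,
            a[2] + (math.sin(x1) + math.sin(x2)) / 6)

def d(x, y):
    """The l1 metric used in the question."""
    return sum(abs(xi - yi) for xi, yi in zip(x, y))

a = (1.0, -2.0, 0.5)    # arbitrary (a1, a2, a3)
x = (0.0, 0.0, 0.0)
for _ in range(60):     # geometric convergence: contraction factor <= 1/3
    x = T(x, a)

residual = d(x, T(x, a))
```

Sixty iterations shrink the distance to the fixed point by a factor of at least $3^{60}$, so the residual is at machine precision.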

Paper 4, Section II, G

(a) Let $V$ be a real vector space. What does it mean to say that two norms on $V$ are Lipschitz equivalent? Prove that every norm on $\mathbb{R}^{n}$ is Lipschitz equivalent to the Euclidean norm. Hence or otherwise, show that any linear map from $\mathbb{R}^{n}$ to $\mathbb{R}^{m}$ is continuous.

(b) Let $f: U \rightarrow V$ be a linear map between normed real vector spaces. We say that $f$ is bounded if there exists a constant $C$ such that for all $u \in U,\|f(u)\| \leqslant C\|u\|$. Show that $f$ is bounded if and only if $f$ is continuous.

(c) Let $\ell^{2}$ denote the space of sequences $\left(x_{n}\right)_{n \geqslant 1}$ of real numbers such that $\sum_{n \geqslant 1} x_{n}^{2}$ is convergent, with the norm $\left\|\left(x_{n}\right)_{n}\right\|=\left(\sum_{n \geqslant 1} x_{n}^{2}\right)^{1 / 2}$. Let $e_{m} \in \ell^{2}$ be the sequence $e_{m}=\left(x_{n}\right)_{n}$ with $x_{m}=1$ and $x_{n}=0$ if $n \neq m$. Let $w$ be the sequence $\left(2^{-n}\right)_{n}$. Show that the subset $\{w\} \cup\left\{e_{m} \mid m \geqslant 1\right\}$ is linearly independent. Let $V \subset \ell^{2}$ be the subspace it spans, and consider the linear map $f: V \rightarrow \mathbb{R}$ defined by

$f(w)=1, \quad f\left(e_{m}\right)=0 \quad \text { for all } m \geqslant 1 .$

Is $f$ continuous? Justify your answer.
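For part (c), the point is that $f$ is unbounded: each truncation $w_{N}=\sum_{n \leqslant N} 2^{-n} e_{n}$ lies in the span of the $e_{m}$, so $f(w-w_{N})=f(w)=1$ for every $N$, while $\|w-w_{N}\|=2^{-N}/\sqrt{3} \rightarrow 0$. A numerical illustration of the shrinking norms:

```python
import math

def tail_norm(N, terms=60):
    """l2 norm of v_N = w - w_N, where w_N keeps the first N entries of w = (2^-n)."""
    return math.sqrt(sum(4.0 ** (-n) for n in range(N + 1, terms)))

# v_N lies in V and f(v_N) = f(w) - sum_{n<=N} 2^{-n} f(e_n) = 1 for every N,
# yet ||v_N|| = 2^{-N}/sqrt(3) -> 0, so no constant C gives |f(v)| <= C ||v||.
Ns = (1, 5, 10, 20)
norms = [tail_norm(N) for N in Ns]
exact = [2.0 ** (-N) / math.sqrt(3.0) for N in Ns]
```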

Paper 4, Section I, G

State carefully Rouché's theorem. Use it to show that the function $z^{4}+3+e^{i z}$ has exactly one zero $z=z_{0}$ in the quadrant

$\{z \in \mathbb{C} \mid 0<\arg (z)<\pi / 2\}$

and that $\left|z_{0}\right| \leqslant \sqrt{2}$.
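Rouché's theorem gives existence and uniqueness; the zero itself can be located numerically by a complex Newton iteration. The starting guess is near the fourth root of $-3$ in the first quadrant, since $|e^{iz}|=e^{-\operatorname{Im} z} \leqslant 1$ in the upper half plane. This is a sanity check, not a proof:

```python
import cmath

def f(z):
    return z ** 4 + 3 + cmath.exp(1j * z)

def fp(z):
    return 4 * z ** 3 + 1j * cmath.exp(1j * z)

z = 0.93 + 0.93j          # near 3^(1/4) e^{i pi/4}
for _ in range(50):
    z = z - f(z) / fp(z)  # Newton iteration

root_ok = abs(f(z)) < 1e-10
in_quadrant = 0 < cmath.phase(z) < cmath.pi / 2
within_bound = abs(z) <= 2 ** 0.5
```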

Paper 4, Section II, A

(a) Show that the Laplace transform of the Heaviside step function $H(t-a)$ is

$\int_{0}^{\infty} H(t-a) e^{-p t} d t=\frac{e^{-a p}}{p}$

for $a>0$.

(b) Derive an expression for the Laplace transform of the second derivative of a function $f(t)$ in terms of the Laplace transform of $f(t)$ and the properties of $f(t)$ at $t=0$.

(c) A bar of length $L$ has its end at $x=L$ fixed. The bar is initially at rest and straight. The end at $x=0$ is given a small fixed transverse displacement of magnitude $a$ at $t=0^{+}$. You may assume that the transverse displacement $y(x, t)$ of the bar satisfies the wave equation with some wave speed $c$, so that $y(x, t)$ is the solution to the problem:

$\begin{array}{cl} \frac{\partial^{2} y}{\partial t^{2}}=c^{2} \frac{\partial^{2} y}{\partial x^{2}} & \text { for } 0<x<L \text { and } t>0, \\ y(x, 0)=\frac{\partial y}{\partial t}(x, 0)=0 & \text { for } 0<x<L, \\ y(0, t)=a ; \quad y(L, t)=0 & \text { for } t>0 . \end{array}$

(i) Show that the Laplace transform $Y(x, p)$ of $y(x, t)$, defined as

$Y(x, p)=\int_{0}^{\infty} y(x, t) e^{-p t} d t$

is given by

$Y(x, p)=\frac{a \sinh \left[\frac{p}{c}(L-x)\right]}{p \sinh \left[\frac{p L}{c}\right]}$

(ii) By use of the binomial theorem or otherwise, express $y(x, t)$ as an infinite series.

(iii) Plot the transverse displacement of the midpoint of the bar $y(L / 2, t)$ against time.
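For parts (ii)–(iii): expanding $1 / \sinh (p L / c)$ as a geometric series in $e^{-2 p L / c}$ and inverting term by term gives a sum of shifted Heaviside functions, $y(x, t)=a \sum_{n \geqslant 0}\left[H\left(t-\frac{2 n L+x}{c}\right)-H\left(t-\frac{2(n+1) L-x}{c}\right)\right]$; at $x=L / 2$ the displacement alternates between $a$ and $0$. A sketch of this series (with illustrative values $a=L=c=1$):

```python
def y(x, t, a=1.0, L=1.0, c=1.0, nmax=50):
    """Transverse displacement from the term-by-term inversion of Y(x, p)."""
    H = lambda s: 1.0 if s > 0 else 0.0
    return a * sum(H(c * t - (2 * n * L + x)) - H(c * t - (2 * (n + 1) * L - x))
                   for n in range(nmax))

# midpoint x = L/2: displacement is a on (L/2c + 2nL/c, 3L/2c + 2nL/c), else 0
samples = [y(0.5, t) for t in (0.2, 1.0, 1.8, 2.7, 3.2)]
```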

Paper 4, Section I, D

(a) Starting from Maxwell's equations, show that in a vacuum,

$\frac{1}{c^{2}} \frac{\partial^{2} \mathbf{E}}{\partial t^{2}}-\nabla^{2} \mathbf{E}=\mathbf{0} \quad \text { and } \quad \boldsymbol{\nabla} \cdot \mathbf{E}=0 \quad \text { where } \quad c=\sqrt{\frac{1}{\epsilon_{0} \mu_{0}}} .$

(b) Suppose that $\mathbf{E}=\frac{E_{0}}{\sqrt{2}}(1,1,0) \cos (k z-\omega t)$ where $E_{0}, k$ and $\omega$ are real constants.

(i) What are the wavevector and the polarisation? How is $\omega$ related to $k$ ?

(ii) Find the magnetic field $\mathbf{B}$.

(iii) Compute and interpret the time-averaged value of the Poynting vector, $\mathbf{S}=\frac{1}{\mu_{0}} \mathbf{E} \times \mathbf{B}$.
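For part (iii), the time average works out to $\langle\mathbf{S}\rangle=\frac{E_{0}^{2}}{2 \mu_{0} c} \hat{\mathbf{z}}$, since the average of $\cos ^{2}$ over a period is $1/2$. A numerical check, with illustrative parameter values and $\mathbf{B}$ taken as $\frac{1}{c} \hat{\mathbf{z}} \times \mathbf{E}$ for this $+z$-travelling wave (consistent with part (ii)):

```python
import math

E0, k, omega, mu0 = 2.0, 3.0, 3.0, 1.0    # illustrative values; c = omega/k = 1
c = omega / k

def fields(z, t):
    """E as given; B = (1/c) z_hat x E for a wave travelling in +z."""
    ph = math.cos(k * z - omega * t)
    E = (E0 / math.sqrt(2) * ph, E0 / math.sqrt(2) * ph, 0.0)
    B = (-E[1] / c, E[0] / c, 0.0)
    return E, B

def S_z(z, t):
    """z-component of the Poynting vector E x B / mu0."""
    E, B = fields(z, t)
    return (E[0] * B[1] - E[1] * B[0]) / mu0

period = 2 * math.pi / omega
N = 100000
avg = sum(S_z(0.0, n * period / N) for n in range(N)) / N
expected = E0 ** 2 / (2 * mu0 * c)         # <S> = E0^2/(2 mu0 c) z_hat
```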

Paper 4, Section II, C

(a) Show that for an incompressible fluid, $\nabla \times \boldsymbol{\omega}=-\nabla^{2} \mathbf{u}$, where $\boldsymbol{\omega}$ is the flow vorticity.

(b) State the equation of motion for an inviscid flow of constant density in a rotating frame subject to gravity. Show that, on Earth, the local vertical component of the centrifugal force is small compared to gravity. Present a scaling argument to justify the linearisation of the Euler equations for sufficiently large rotation rates, and hence deduce the linearised version of the Euler equations in a rapidly rotating frame.

(c) Denoting the rotation rate of the frame as $\boldsymbol{\Omega}=\Omega \mathbf{e}_{z}$, show that the linearised Euler equations may be manipulated to obtain an equation for the velocity field $\mathbf{u}$ in the form

$\frac{\partial^{2} \nabla^{2} \mathbf{u}}{\partial t^{2}}+4 \Omega^{2} \frac{\partial^{2} \mathbf{u}}{\partial z^{2}}=\mathbf{0}$

(d) Assume that there exist solutions of the form $\mathbf{u}=\mathbf{U}_{0} \exp [i(\mathbf{k} \cdot \mathbf{x}-\omega t)]$. Show that $\omega=\pm 2 \Omega \cos \theta$ where the angle $\theta$ is to be determined.
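For part (d), substituting the plane wave into the equation of part (c) gives $(-i \omega)^{2}\left(-|\mathbf{k}|^{2}\right) \mathbf{U}_{0}+4 \Omega^{2}\left(i k_{z}\right)^{2} \mathbf{U}_{0}=\mathbf{0}$, i.e. $\omega^{2}|\mathbf{k}|^{2}=4 \Omega^{2} k_{z}^{2}$, so $\omega=\pm 2 \Omega \cos \theta$ with $\theta$ the angle between $\mathbf{k}$ and the rotation axis. A residual check with an arbitrary wavevector:

```python
import math

Omega = 0.7
k = (1.0, 2.0, 2.0)                     # arbitrary wavevector
kk = math.sqrt(sum(c * c for c in k))
cos_theta = k[2] / kk                   # angle between k and the rotation axis e_z
omega = 2 * Omega * cos_theta

# residual of the dispersion relation omega^2 |k|^2 - 4 Omega^2 k_z^2 = 0
residual = omega ** 2 * kk ** 2 - 4 * Omega ** 2 * k[2] ** 2
```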

Paper 4, Section II, F

Let $\alpha(s)=(f(s), g(s))$ be a simple curve in $\mathbb{R}^{2}$ parameterised by arc length with $f(s)>0$ for all $s$, and consider the surface of revolution $S$ in $\mathbb{R}^{3}$ defined by the parameterisation

$\sigma(u, v)=(f(u) \cos v, f(u) \sin v, g(u))$

(a) Calculate the first and second fundamental forms for $S$. Show that the Gaussian curvature of $S$ is given by

$K=-\frac{f^{\prime \prime}(u)}{f(u)}$

(b) Now take $f(s)=\cos s+2, g(s)=\sin s, 0 \leqslant s<2 \pi$. What is the integral of the Gaussian curvature over the surface of revolution $S$ determined by $f$ and $g$ ?

[You may use the Gauss-Bonnet theorem without proof.]
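For part (b), the answer is consistent with Gauss-Bonnet: the surface is a torus, with Euler characteristic zero, so the integral of $K$ vanishes. This can also be checked by direct quadrature, using $K=\cos u /(\cos u+2)$ and area element $f(u)\, d u\, d v$:

```python
import math

def gauss_integral(n=20000):
    """Integrate K dA over the torus f = cos u + 2, g = sin u, u, v in [0, 2pi)."""
    du = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        u = i * du
        f = math.cos(u) + 2
        K = math.cos(u) / f            # K = -f''/f with f'' = -cos u
        dA = f * du * (2 * math.pi)    # area element f(u) du dv, v integrated out
        total += K * dA
    return total

val = gauss_integral()    # K dA = cos u du dv, so the integral is 0
```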

(c) Now suppose $S$ has constant curvature $K \equiv 1$, and suppose there are two points $P_{1}, P_{2} \in \mathbb{R}^{3}$ such that $S \cup\left\{P_{1}, P_{2}\right\}$ is a smooth closed embedded surface. Show that $S$ is a unit sphere, minus two antipodal points.

[Do not attempt to integrate an expression of the form $\sqrt{1-C^{2} \sin ^{2} u}$ when $C \neq 1$. Study the behaviour of the surface at the largest and smallest possible values of $u$.]

Paper 4, Section I, E

Give the statement and the proof of Eisenstein's criterion. Use this criterion to show that $x^{p-1}+x^{p-2}+\cdots+1$ is irreducible in $\mathbb{Q}[x]$, where $p$ is a prime.
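The standard argument applies Eisenstein at the prime $p$ after the substitution $x \mapsto x+1$, since $\Phi_{p}(x+1)=\left((x+1)^{p}-1\right) / x=\sum_{k=1}^{p}\binom{p}{k} x^{k-1}$. A quick check of the three Eisenstein hypotheses for several primes:

```python
from math import comb

def eisenstein_after_shift(p):
    """Coefficients of Phi_p(x + 1) = ((x+1)^p - 1)/x, lowest degree first."""
    coeffs = [comb(p, k) for k in range(1, p + 1)]
    lead_monic = coeffs[-1] == 1
    middle_div = all(c % p == 0 for c in coeffs[:-1])   # p divides all but the lead
    const_ok = coeffs[0] % (p * p) != 0                 # p^2 does not divide p
    return lead_monic and middle_div and const_ok

checks = [eisenstein_after_shift(p) for p in (2, 3, 5, 7, 11, 13)]
```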

Paper 4, Section II, E

Let $R$ be a Noetherian ring and let $M$ be a finitely generated $R$-module.

(a) Show that every submodule of $M$ is finitely generated.

(b) Show that each maximal element of the set

$\mathcal{A}=\{\operatorname{Ann}(m) \mid 0 \neq m \in M\}$

is a prime ideal. [Here, maximal means maximal with respect to inclusion, and $\operatorname{Ann}(m)=\{r \in R \mid r m=0\} .]$

(c) Show that there is a chain of submodules

$0=M_{0} \subseteq M_{1} \subseteq \cdots \subseteq M_{l}=M$

such that for each $0<i \leqslant l$ the quotient $M_{i} / M_{i-1}$ is isomorphic to $R / P_{i}$ for some prime ideal $P_{i}$.

Paper 4, Section I, F

For which real numbers $x$ do the vectors

$(x, 1,1,1), \quad(1, x, 1,1), \quad(1,1, x, 1), \quad(1,1,1, x),$

not form a basis of $\mathbb{R}^{4}$ ? For each such value of $x$, what is the dimension of the subspace of $\mathbb{R}^{4}$ that they span? For each such value of $x$, provide a basis for the spanned subspace, and extend this basis to a basis of $\mathbb{R}^{4}$.
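The four vectors are the rows of $(x-1) I+J$, where $J$ is the all-ones matrix, so the determinant is $(x+3)(x-1)^{3}$ and the basis fails exactly at $x=1$ and $x=-3$. A brute-force check:

```python
def det(m):
    """Determinant by cofactor expansion along the first row (fine for 4 x 4)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def M(x):
    """Matrix whose rows are the four given vectors."""
    return [[x if i == j else 1 for j in range(4)] for i in range(4)]

# det M(x) = (x + 3)(x - 1)^3: zero at x = 1 and x = -3, non-zero otherwise
vals = {x: det(M(x)) for x in (-3, 1, 0, 2)}
```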

Paper 4, Section II, F

(a) Let $\alpha: V \rightarrow W$ be a linear transformation between finite dimensional vector spaces over a field $F=\mathbb{R}$ or $\mathbb{C}$.

Define the dual map of $\alpha$. Let $\delta$ be the dual map of $\alpha$. Given a subspace $U \subseteq V$, define the annihilator $U^{\circ}$ of $U$. Show that $(\operatorname{ker} \alpha)^{\circ}$ and the image of $\delta$ coincide. Conclude that the dimension of the image of $\alpha$ is equal to the dimension of the image of $\delta$. Show that $\operatorname{dim} \operatorname{ker}(\alpha)-\operatorname{dim} \operatorname{ker}(\delta)=\operatorname{dim} V-\operatorname{dim} W$.

(b) Now suppose in addition that $V, W$ are inner product spaces. Define the adjoint $\alpha^{*}$ of $\alpha$. Let $\beta: U \rightarrow V, \gamma: V \rightarrow W$ be linear transformations between finite dimensional inner product spaces. Suppose that the image of $\beta$ is equal to the kernel of $\gamma$. Then show that $\beta \beta^{*}+\gamma^{*} \gamma$ is an isomorphism.

Paper 4, Section I, H

Consider two boxes, labelled A and B. Initially, there are no balls in box A and $k$ balls in box B. Each minute, one of the $k$ balls is chosen uniformly at random and moved to the opposite box. Let $X_{n}$ denote the number of balls in box A at time $n$, so that $X_{0}=0$.

(a) Find the transition probabilities of the Markov chain $\left(X_{n}\right)_{n \geqslant 0}$ and show that it is reversible in equilibrium.

(b) Find $\mathbb{E}(T)$, where $T=\inf \left\{n \geqslant 1: X_{n}=0\right\}$ is the next time that all $k$ balls are again in box $B$.
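Part (b) has the closed form $\mathbb{E}(T)=2^{k}$: this Ehrenfest-type chain is reversible with $\operatorname{Bin}(k, 1/2)$ equilibrium, and the mean return time to a state is the reciprocal of its equilibrium probability. A first-step-analysis check in exact arithmetic (for small $k$):

```python
from fractions import Fraction

def expected_return(k):
    """E[T] for return to state 0, by first-step analysis.

    h_i = expected steps to reach 0 from i; propagate h_i as an affine
    function a_i + b_i * h_1 of the unknown h_1, then fix h_1 via the
    boundary relation h_k = 1 + h_{k-1}.
    """
    a = [Fraction(0), Fraction(0)]   # h_0 = 0, h_1 unknown
    b = [Fraction(0), Fraction(1)]
    for i in range(1, k):
        # rearranged: h_{i+1} = (k/(k-i)) * (h_i - 1 - (i/k) h_{i-1})
        coef = Fraction(k, k - i)
        a.append(coef * (a[i] - 1 - Fraction(i, k) * a[i - 1]))
        b.append(coef * (b[i] - Fraction(i, k) * b[i - 1]))
    h1 = (1 + a[k - 1] - a[k]) / (b[k] - b[k - 1])
    return 1 + h1                    # the first step from 0 always goes to 1

results = [expected_return(k) for k in range(1, 8)]
```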

Paper 4, Section I, A

Consider the function $f(x)$ defined by

$f(x)=x^{2}, \text { for }-\pi<x<\pi$

Calculate the Fourier series representation for the $2 \pi$-periodic extension of this function. Hence establish that

$\frac{\pi^{2}}{6}=\sum_{n=1}^{\infty} \frac{1}{n^{2}}$

and that

$\frac{\pi^{2}}{12}=\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^{2}}$
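The two sums come from evaluating the Fourier series $x^{2} \sim \frac{\pi^{2}}{3}+4 \sum_{n \geqslant 1} \frac{(-1)^{n}}{n^{2}} \cos n x$ at $x=\pi$ and $x=0$ respectively. A quick numerical confirmation of the limits:

```python
import math

# Evaluating the Fourier series of x^2 at x = pi and x = 0 gives these sums.
N = 100000
basel = sum(1.0 / n ** 2 for n in range(1, N + 1))            # -> pi^2 / 6
alt = sum((-1) ** (n + 1) / n ** 2 for n in range(1, N + 1))  # -> pi^2 / 12
```

The alternating series converges much faster (error $O(N^{-2})$ versus $O(N^{-1})$).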

Paper 4, Section II, B

Let $\mathcal{D}$ be a 2-dimensional region in $\mathbb{R}^{2}$ with boundary $\partial \mathcal{D}$. In this question you may assume Green's second identity:

$\int_{\mathcal{D}}\left(f \nabla^{2} g-g \nabla^{2} f\right) d A=\int_{\partial \mathcal{D}}\left(f \frac{\partial g}{\partial n}-g \frac{\partial f}{\partial n}\right) d l$

where $\frac{\partial}{\partial n}$ denotes the outward normal derivative on $\partial \mathcal{D}$, and $f$ and $g$ are suitably regular functions, which may include the free space Green's function in two dimensions. You may also assume that the free space Green's function for the Laplace equation in two dimensions is given by

$G_{0}\left(\boldsymbol{r}, \boldsymbol{r}_{0}\right)=\frac{1}{2 \pi} \log \left|\boldsymbol{r}-\boldsymbol{r}_{0}\right|$

(a) State the conditions required on a function $G\left(\boldsymbol{r}, \boldsymbol{r}_{\mathbf{0}}\right)$ for it to be a Dirichlet Green's function for the Laplace operator on $\mathcal{D}$. Suppose that $\nabla^{2} \psi=0$ on $\mathcal{D}$. Show that if $G$ is a Dirichlet Green's function for $\mathcal{D}$ then

$\psi\left(\boldsymbol{r}_{\mathbf{0}}\right)=\int_{\partial \mathcal{D}} \psi(\boldsymbol{r}) \frac{\partial}{\partial n} G\left(\boldsymbol{r}, \boldsymbol{r}_{\mathbf{0}}\right) d l \quad \text { for } \boldsymbol{r}_{\mathbf{0}} \in \mathcal{D}$

(b) Consider the Laplace equation $\nabla^{2} \phi=0$ in the quarter space

$\mathcal{D}=\{(x, y): x \geqslant 0 \text { and } y \geqslant 0\}$

with boundary conditions

$\phi(x, 0)=e^{-x^{2}}, \phi(0, y)=e^{-y^{2}} \text { and } \phi(x, y) \rightarrow 0 \text { as } \sqrt{x^{2}+y^{2}} \rightarrow \infty$

Using the method of images, show that the solution is given by

$\phi\left(x_{0}, y_{0}\right)=F\left(x_{0}, y_{0}\right)+F\left(y_{0}, x_{0}\right),$

where

$F\left(x_{0}, y_{0}\right)=\frac{4 x_{0} y_{0}}{\pi} \int_{0}^{\infty} \frac{t e^{-t^{2}}}{\left[\left(t-x_{0}\right)^{2}+y_{0}^{2}\right]\left[\left(t+x_{0}\right)^{2}+y_{0}^{2}\right]} d t$
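The image construction behind part (b) uses three images of $\boldsymbol{r}_{0}=(x_{0}, y_{0})$: reflections in the two boundary half-lines and in the origin, with signs $-,-,+$, so that the Green's function vanishes on both boundary half-lines. A numerical check of that vanishing:

```python
import math

def G0(r, r0):
    """Free space Green's function (1/2pi) log |r - r0|."""
    return math.log(math.hypot(r[0] - r0[0], r[1] - r0[1])) / (2 * math.pi)

def G(r, r0):
    """Dirichlet Green's function for the quarter plane via three images."""
    x0, y0 = r0
    return (G0(r, (x0, y0)) - G0(r, (x0, -y0))
            - G0(r, (-x0, y0)) + G0(r, (-x0, -y0)))

r0 = (1.3, 0.7)
boundary_vals = ([G((x, 0.0), r0) for x in (0.5, 2.0, 7.0)]
                 + [G((0.0, y), r0) for y in (0.5, 2.0, 7.0)])
```

On $y=0$ the first two terms cancel and the last two cancel; on $x=0$ the pairing is first with third, second with fourth.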

Paper 4, Section II, E

(a) Let $X$ be a topological space. Define what is meant by a quotient of $X$ and describe the quotient topology on the quotient space. Give an example in which $X$ is Hausdorff but the quotient space is not Hausdorff.

(b) Let $T^{2}$ be the 2-dimensional torus considered as the quotient $\mathbb{R}^{2} / \mathbb{Z}^{2}$, and let $\pi: \mathbb{R}^{2} \rightarrow T^{2}$ be the quotient map.

(i) Let $B(u, r)$ be the open ball in $\mathbb{R}^{2}$ with centre $u$ and radius $r<1 / 2$. Show that $U=\pi(B(u, r))$ is an open subset of $T^{2}$ and show that $\pi^{-1}(U)$ has infinitely many connected components. Show each connected component is homeomorphic to $B(u, r)$.

(ii) Let $\alpha \in \mathbb{R}$ be an irrational number and let $L \subset \mathbb{R}^{2}$ be the line given by the equation $y=\alpha x$. Show that $\pi(L)$ is dense in $T^{2}$ but $\pi(L) \neq T^{2}$.
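Part (ii) rests on the classical fact that the orbit of an irrational rotation, $\{n \alpha \bmod 1\}$, is dense in $[0,1)$. A numerical illustration (with $\alpha=\sqrt{2}$ and an arbitrary target point):

```python
import math

alpha = math.sqrt(2)   # irrational slope, as in the question
target = 0.7           # arbitrary point on the circle
# the orbit of the irrational rotation comes arbitrarily close to any target
best = min(abs((n * alpha) % 1.0 - target) for n in range(1, 100001))
```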

Paper 4, Section I, D

(a) Define the linear stability domain for a numerical method to solve $\mathbf{y}^{\prime}=\mathbf{f}(t, \mathbf{y})$. What is meant by an A-stable method?

(b) A two-stage Runge-Kutta scheme is given by

$\mathbf{k}_{1}=\mathbf{f}\left(t_{n}, \mathbf{y}_{n}\right), \quad \mathbf{k}_{2}=\mathbf{f}\left(t_{n}+\frac{h}{2}, \mathbf{y}_{n}+\frac{h}{2} \mathbf{k}_{1}\right), \quad \mathbf{y}_{n+1}=\mathbf{y}_{n}+h \mathbf{k}_{2},$

where $h$ is the step size and $t_{n}=n h$. Show that the order of this scheme is at least two. For this scheme, find the intersection of the linear stability domain with the real axis. Hence show that this method is not A-stable.
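Applying the scheme to the scalar test equation $y^{\prime}=\lambda y$ with $z=h \lambda$ gives $y_{n+1}=\left(1+z+\frac{z^{2}}{2}\right) y_{n}$, so the real-axis intersection of the stability domain is the interval $(-2,0)$; since this is bounded, the method cannot be A-stable. A quick numerical confirmation of that interval:

```python
def R(z):
    """Amplification factor: the scheme applied to y' = lam*y with z = h*lam
    gives y_{n+1} = (1 + z + z^2/2) y_n."""
    return 1 + z + z * z / 2

inside = all(abs(R(z)) < 1 for z in (-1.99, -1.0, -0.01))   # stable on (-2, 0)
left_out = abs(R(-2.01)) >= 1
right_out = abs(R(0.01)) >= 1
endpoint = R(-2.0)                                          # |R| = 1 at z = -2
```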

Paper 4, Section II, H

(a) What is the maximal flow problem in a network? Explain the Ford-Fulkerson algorithm. Prove that this algorithm terminates if the initial flow is set to zero and all arc capacities are rational numbers.

(b) Let $A=\left(a_{i, j}\right)_{i, j}$ be an $n \times n$ matrix. We say that $A$ is doubly stochastic if $0 \leqslant a_{i, j} \leqslant 1$ for $i, j$ and

$\begin{aligned} &\sum_{i=1}^{n} a_{i, j}=1 \text { for all } j \\ &\sum_{j=1}^{n} a_{i, j}=1 \text { for all } i \end{aligned}$

We say that $A$ is a permutation matrix if $a_{i, j} \in\{0,1\}$ for all $i, j$ and

for all $j$ there exists a unique $i$ such that $a_{i, j}=1$,

for all $i$ there exists a unique $j$ such that $a_{i, j}=1$.

Let $\mathcal{C}$ be the set of all $n \times n$ doubly stochastic matrices. Show that a matrix $A$ is an extreme point of $\mathcal{C}$ if and only if $A$ is a permutation matrix.
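A compact sketch of the algorithm from part (a), in the Edmonds-Karp variant (augmenting along shortest paths found by BFS, which terminates even for irrational capacities). The toy network and its capacities are invented for illustration; the minimum cut separating the source has value 5:

```python
from collections import deque

def max_flow(cap, s, t):
    """Ford-Fulkerson with BFS-shortest augmenting paths (Edmonds-Karp)."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual network
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path: flow is maximal
            return total
        # bottleneck along the path, then augment
        v, bottleneck = t, float('inf')
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck  # residual capacity for undoing flow
            v = u
        total += bottleneck

# toy network: node 0 = source, node 3 = sink; max flow is 5
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
value = max_flow(cap, 0, 3)
```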

Paper 4, Section I, B

(a) Define the quantum orbital angular momentum operator $\hat{\boldsymbol{L}}=\left(\hat{L}_{1}, \hat{L}_{2}, \hat{L}_{3}\right)$ in three dimensions, in terms of the position and momentum operators.

(b) Show that $\left[\hat{L}_{1}, \hat{L}_{2}\right]=i \hbar \hat{L}_{3}$. [You may assume that the position and momentum operators satisfy the canonical commutation relations.]

(c) Let $\hat{L}^{2}=\hat{L}_{1}^{2}+\hat{L}_{2}^{2}+\hat{L}_{3}^{2}$. Show that $\hat{L}_{1}$ commutes with $\hat{L}^{2}$.

[In this part of the question you may additionally assume without proof the permuted relations $\left[\hat{L}_{2}, \hat{L}_{3}\right]=i \hbar \hat{L}_{1}$ and $\left.\left[\hat{L}_{3}, \hat{L}_{1}\right]=i \hbar \hat{L}_{2} .\right]$

[Hint: It may be useful to consider the expression $[\hat{A}, \hat{B}] \hat{B}+\hat{B}[\hat{A}, \hat{B}]$ for suitable operators $\hat{A}$ and $\hat{B}$.]

(d) Suppose that $\psi_{1}(x, y, z)$ and $\psi_{2}(x, y, z)$ are normalised eigenstates of $\hat{L}_{1}$ with eigenvalues $\hbar$ and $-\hbar$ respectively. Consider the wavefunction

$\psi=\frac{1}{2} \psi_{1} \cos \omega t+\frac{\sqrt{3}}{2} \psi_{2} \sin \omega t$

with $\omega$ being a positive constant. Find the earliest time $t_{0}>0$ such that the expectation value of $\hat{L}_{1}$ in $\psi$ is zero.
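For part (d), note that $\psi$ is not normalised for general $t$, so the expectation value carries the factor $\langle\psi | \psi\rangle^{-1}$; the zero occurs where $\frac{1}{4} \cos ^{2} \omega t=\frac{3}{4} \sin ^{2} \omega t$, i.e. $\tan ^{2} \omega t=\frac{1}{3}$, giving $t_{0}=\pi /(6 \omega)$. A numerical confirmation:

```python
import math

hbar, omega = 1.0, 2.0      # illustrative values

def expectation_L1(t):
    """<psi|L1|psi> / <psi|psi>; psi is not normalised for general t."""
    c1 = 0.5 * math.cos(omega * t)                  # coefficient of psi_1
    c2 = (math.sqrt(3) / 2) * math.sin(omega * t)   # coefficient of psi_2
    return hbar * (c1 ** 2 - c2 ** 2) / (c1 ** 2 + c2 ** 2)

t0 = math.pi / (6 * omega)   # where tan^2(omega t) = 1/3
val_at_t0 = expectation_L1(t0)
earlier_positive = all(expectation_L1(s) > 0 for s in (0.1 * t0, 0.5 * t0, 0.9 * t0))
```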

Paper 4, Section II, H

Consider the linear regression model

$Y_{i}=\alpha+\beta x_{i}+\varepsilon_{i}$

for $i=1, \ldots, n$, where the non-zero numbers $x_{1}, \ldots, x_{n}$ are known and are such that $x_{1}+\ldots+x_{n}=0$, the independent random variables $\varepsilon_{1}, \ldots, \varepsilon_{n}$ have the $N\left(0, \sigma^{2}\right)$ distribution, and the parameters $\alpha, \beta$ and $\sigma^{2}$ are unknown.

(a) Let $(\hat{\alpha}, \hat{\beta})$ be the maximum likelihood estimator of $(\alpha, \beta)$. Prove that for each $i$, the random variables $\hat{\alpha}, \hat{\beta}$ and $Y_{i}-\hat{\alpha}-\hat{\beta} x_{i}$ are uncorrelated. Using standard facts about the multivariate normal distribution, prove that $\hat{\alpha}, \hat{\beta}$ and $\sum_{i=1}^{n}\left(Y_{i}-\hat{\alpha}-\hat{\beta} x_{i}\right)^{2}$ are independent.

(b) Find the critical region of the generalised likelihood ratio test of size $5 \%$ for testing $H_{0}: \alpha=0$ versus $H_{1}: \alpha \neq 0$. Prove that the power function of this test is of the form $w\left(\alpha, \beta, \sigma^{2}\right)=g(\alpha / \sigma)$ for some function $g$. [You are not required to find $g$ explicitly.]
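The uncorrelatedness in part (a) reduces to a projection identity: with design matrix $X=[\mathbf{1} \; x]$, we have $\hat{\alpha}=a^{\top} Y$ and $\hat{\beta}=b^{\top} Y$ for $a=\mathbf{1} / n$ and $b=x / \sum x_{i}^{2}$ (using $\sum x_{i}=0$), and both $a$ and $b$ lie in the column space of $X$, so $(I-H) a=(I-H) b=0$ where $H$ is the hat matrix. A numpy check with hypothetical $x$ values:

```python
import numpy as np

x = np.array([-3.0, -2.0, -1.0, 1.0, 2.0, 3.0])   # hypothetical non-zero x_i, sum = 0
n = len(x)
X = np.column_stack([np.ones(n), x])               # design matrix
H = X @ np.linalg.inv(X.T @ X) @ X.T               # hat (projection) matrix

a = np.ones(n) / n          # alpha_hat = a^T Y (uses sum x_i = 0)
b = x / (x @ x)             # beta_hat  = b^T Y
# Cov(alpha_hat, residual vector) = sigma^2 (I - H) a, similarly for beta_hat;
# both vanish because a and b lie in the column space of X.
res_alpha = (np.eye(n) - H) @ a
res_beta = (np.eye(n) - H) @ b
cov_ab = a @ b              # Cov(alpha_hat, beta_hat)/sigma^2 = 0 since sum x_i = 0
```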

Paper 4, Section II, C

A fish swims in the ocean along a straight line with speed $V(t)$. The fish starts its journey from rest (zero velocity at $t=0$ ) and, during a given time $T$, swims subject to the constraint that the total distance travelled is $L$. The energy cost for swimming is $a V^{2}+b \dot{V}^{2}$ per unit time, where $a, b \geqslant 0$ are known and $a^{2}+b^{2} \neq 0$.

(a) Derive the Euler-Lagrange condition on $V(t)$ for the journey to have minimum energetic cost.

(b) In the case $a \neq 0, b \neq 0$ solve for $V(t)$ assuming that the fish starts at $t=0$ with zero acceleration (in addition to zero velocity).

(c) In the case $a=0$, the fish can decide between three different boundary conditions for its journey. In addition to starting with zero velocity, it can:

(1) start at $t=0$ with zero acceleration;

(2) end at $t=T$ with zero velocity; or

(3) end at $t=T$ with zero acceleration.

Which of (1), (2) or (3) gives the minimum energy cost?
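For $a=0$, the Euler-Lagrange equation with the isoperimetric constraint forces $\ddot{V}$ to be constant, so $V(t)=A t^{2}+B t$ once $V(0)=0$. Under that assumption the three boundary-condition choices can be compared numerically (illustrative values $L=T=b=1$); the computation gives $12 b L^{2} / T^{3}$ for options (1) and (2) but $3 b L^{2} / T^{3}$ for option (3):

```python
def energy(A, B, T=1.0, b=1.0, n=100000):
    """b * integral_0^T Vdot^2 dt for V(t) = A t^2 + B t, by the midpoint rule."""
    dt = T / n
    return b * sum((2 * A * (i + 0.5) * dt + B) ** 2 for i in range(n)) * dt

def coeffs(option, L=1.0, T=1.0):
    """V = A t^2 + B t with V(0) = 0 and integral_0^T V dt = L, plus one extra BC."""
    if option == 1:                      # zero initial acceleration: Vdot(0) = 0
        return 3 * L / T ** 3, 0.0
    if option == 2:                      # zero final velocity: V(T) = 0
        return -6 * L / T ** 3, 6 * L / T ** 2
    if option == 3:                      # zero final acceleration: Vdot(T) = 0
        return -3 * L / (2 * T ** 3), 3 * L / T ** 2

E1, E2, E3 = (energy(*coeffs(opt)) for opt in (1, 2, 3))
```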