
# Part IB, 2011, Paper 1


Paper 1, Section II, E

What is meant by saying that a sequence of functions $f_{n}$ converges uniformly to a function $f$?

Let $f_{n}$ be a sequence of differentiable functions on $[a, b]$ with $f_{n}^{\prime}$ continuous and such that $f_{n}\left(x_{0}\right)$ converges for some point $x_{0} \in[a, b]$. Assume in addition that $f_{n}^{\prime}$ converges uniformly on $[a, b]$. Prove that $f_{n}$ converges uniformly to a differentiable function $f$ on $[a, b]$ and $f^{\prime}(x)=\lim _{n \rightarrow \infty} f_{n}^{\prime}(x)$ for all $x \in[a, b]$. [You may assume that the uniform limit of continuous functions is continuous.]

Show that the series

$\zeta(s)=\sum_{n=1}^{\infty} \frac{1}{n^{s}}$

converges for $s>1$ and is uniformly convergent on $[1+\varepsilon, \infty)$ for any $\varepsilon>0$. Show that $\zeta(s)$ is differentiable on $(1, \infty)$ and

$\zeta^{\prime}(s)=-\sum_{n=2}^{\infty} \frac{\log n}{n^{s}}$

[You may use the Weierstrass $M$-test provided it is clearly stated.]
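As a numerical sanity check (not part of the original question; the truncation point `N` and step `h` are arbitrary choices), the term-by-term derivative $-\sum \log n / n^s$ can be compared against a central-difference estimate of the derivative of the partial sums:

```python
from math import log

def zeta_partial(s, N=100000):
    """Partial sum of sum_{n>=1} 1/n^s, truncated at N terms."""
    return sum(n ** -s for n in range(1, N + 1))

def zeta_prime_partial(s, N=100000):
    """Partial sum of -sum_{n>=2} log(n)/n^s, the claimed derivative."""
    return -sum(log(n) / n ** s for n in range(2, N + 1))

s, h = 3.0, 1e-4
fd = (zeta_partial(s + h) - zeta_partial(s - h)) / (2 * h)  # central difference
print(fd, zeta_prime_partial(s))
```

The two printed values agree to many decimal places, consistent with differentiability of $\zeta$ on $(1, \infty)$.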

Paper 1, Section I, A

Derive the Cauchy-Riemann equations satisfied by the real and imaginary parts of a complex analytic function $f(z)$.

If $|f(z)|$ is constant on $|z|<1$, prove that $f(z)$ is constant on $|z|<1$.

Paper 1, Section II, A

(i) Let $-1<\alpha<0$ and let

$\begin{aligned} &f(z)=\frac{\log (z-\alpha)}{z} \text { where }-\pi \leqslant \arg (z-\alpha)<\pi \\ &g(z)=\frac{\log z}{z} \quad \text { where }-\pi \leqslant \arg (z)<\pi \end{aligned}$

Here the logarithms take their principal values. Give a sketch to indicate the positions of the branch cuts implied by the definitions of $f(z)$ and $g(z)$.

(ii) Let $h(z)=f(z)-g(z)$. Explain why $h(z)$ is analytic in the annulus $1 \leqslant|z| \leqslant R$ for any $R>1$. Obtain the first three terms of the Laurent expansion for $h(z)$ around $z=0$ in this annulus and hence evaluate

$\oint_{|z|=2} h(z) d z$
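A numerical consistency check with the concrete value $\alpha = -1/2$ (our choice, not from the source): on $|z|=2$ one has $h(z)=\log(1-\alpha/z)/z$ with the principal logarithm, since $|\alpha/z|<1$ keeps $1-\alpha/z$ away from the branch cut, and the trapezoidal rule around the contour estimates the integral.

```python
from cmath import exp as cexp, log as clog
from math import pi

alpha = -0.5
N = 20000
total = 0 + 0j
for k in range(N):
    theta = 2 * pi * k / N
    z = 2 * cexp(1j * theta)
    h = clog(1 - alpha / z) / z            # h(z) on the contour
    total += h * (2j * cexp(1j * theta)) * (2 * pi / N)   # h(z) dz

print(abs(total))   # the Laurent series has no 1/z term, so this is ~ 0
```

The trapezoidal rule is spectrally accurate for smooth periodic integrands, so the estimate is close to machine precision.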

Paper 1, Section II, D

Starting from the relevant Maxwell equation, derive Gauss's law in integral form.

Use Gauss's law to obtain the potential at a distance $r$ from an infinite straight wire with charge $\lambda$ per unit length.

Write down the potential due to two infinite wires parallel to the $z$-axis, one at $x=y=0$ with charge $\lambda$ per unit length and the other at $x=0, y=d$ with charge $-\lambda$ per unit length.

Find the potential and the electric field in the limit $d \rightarrow 0$ with $\lambda d=p$ where $p$ is fixed. Sketch the equipotentials and the electric field lines.
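Illustrative numbers only, in units with $2\pi\varepsilon_0 = 1$ (our choice): the exact two-wire potential should approach the line-dipole form $-p y/(x^2+y^2)$ as $d \rightarrow 0$ with $p = \lambda d$ held fixed.

```python
from math import log, hypot

def two_wire_potential(x, y, lam, d):
    # +lam wire through the origin, -lam wire through (0, d);
    # each infinite wire contributes -lam*log(r) in these units
    return -lam * log(hypot(x, y)) + lam * log(hypot(x, y - d))

def dipole_potential(x, y, p):
    return -p * y / (x ** 2 + y ** 2)

p, x, y = 1.0, 0.3, 0.7
for d in (1e-2, 1e-3, 1e-4):
    print(d, two_wire_potential(x, y, p / d, d) - dipole_potential(x, y, p))
```

The printed discrepancy shrinks linearly with $d$, as the expansion of the logarithms suggests.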

Paper 1, Section I, B

Inviscid fluid is contained in a square vessel with sides of length $\pi L$ lying between $x=0, \pi L, y=0, \pi L$. The base of the container is at $z=-H$ where $H \gg L$ and the horizontal surface is at $z=0$ when the fluid is at rest. The variation of pressure of the air above the fluid may be neglected.

Small amplitude surface waves are excited in the vessel.

(i) Now let $H \rightarrow \infty$. Explain why on dimensional grounds the frequencies $\omega$ of such waves are of the form

$\omega=\left(\frac{\gamma g}{L}\right)^{\frac{1}{2}}$

for some positive dimensionless constants $\gamma$, where $g$ is the gravitational acceleration.

It is given that the velocity potential $\phi$ is of the form

$\phi(x, y, z) \approx C \cos (m x / L) \cos (n y / L) \mathrm{e}^{\gamma z / L}$

where $m$ and $n$ are integers and $C$ is a constant.

(ii) Why do cosines, rather than sines, appear in this expression?

(iii) Give an expression for $\gamma$ in terms of $m$ and $n$.

(iv) Give all possible values that $\gamma^{2}$ can take between 1 and 10 inclusive. How many different solutions for $\phi$ correspond to each of these values of $\gamma^{2}$?
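An enumeration sketch, assuming the result of part (iii) that Laplace's equation forces $\gamma^{2}=m^{2}+n^{2}$: list the attainable values of $\gamma^{2}$ in $[1,10]$ and how many $(m, n)$ modes give each one.

```python
from collections import Counter

counts = Counter()
for m in range(4):
    for n in range(4):
        # exclude the trivial constant mode (0, 0)
        if (m, n) != (0, 0) and 1 <= m * m + n * n <= 10:
            counts[m * m + n * n] += 1

for g2 in sorted(counts):
    print(g2, counts[g2])
```

Values with $m \neq n$ come in pairs $(m, n)$ and $(n, m)$, so they admit two solutions; values attained only with $m=n$ admit one.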

Paper 1, Section II, B

A spherical bubble in an incompressible fluid of density $\rho$ has radius $a(t)$. Write down an expression for the velocity field at a radius $R \geqslant a$.

The pressure far from the bubble is $p_{\infty}$. What is the pressure at radius $R$ ?

Find conditions on $a$ and its time derivatives that ensure that the maximum pressure in the fluid is reached at a radius $R_{\max }$ where $a<R_{\max }<\infty$. Give an expression for this maximum pressure when the conditions hold.

Give the most general form of $a(t)$ that ensures that the pressure at $R=a(t)$ is $p_{\infty}$ for all time.

Paper 1, Section I, F

Suppose that $H \subseteq \mathbb{C}$ is the upper half-plane, $H=\{x+i y \mid x, y \in \mathbb{R}, y>0\}$. Using the Riemannian metric $d s^{2}=\frac{d x^{2}+d y^{2}}{y^{2}}$, define the length of a curve $\gamma$ and the area of a region $\Omega$ in $H$.

Find the area of

$\Omega=\left\{x+i y \mid |x| \leqslant \frac{1}{2}, x^{2}+y^{2} \geqslant 1\right\}$
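A quadrature sketch (not part of the question): the hyperbolic area is the integral of $1/y^{2}$ over $\Omega$; doing the $y$-integral by hand leaves $\int_{-1/2}^{1/2} dx/\sqrt{1-x^{2}}$, which should equal $2\arcsin(1/2)=\pi/3$.

```python
from math import sqrt, asin, pi

N = 200000
total = 0.0
for k in range(N):
    x = -0.5 + (k + 0.5) / N          # midpoint rule on [-1/2, 1/2]
    total += 1.0 / sqrt(1.0 - x * x)
total /= N                             # step size is 1/N

print(total, 2 * asin(0.5), pi / 3)
```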

Paper 1, Section II, F

(i) Suppose that $G$ is a finite group of order $p^{n} r$, where $p$ is prime and does not divide $r$. Prove the first Sylow theorem, that $G$ has at least one subgroup of order $p^{n}$, and state the remaining Sylow theorems without proof.

(ii) Suppose that $p, q$ are distinct primes. Show that there is no simple group of order $p q$.

Paper 1, Section I, G

(i) State the rank-nullity theorem for a linear map between finite-dimensional vector spaces.

(ii) Show that a linear transformation $f: V \rightarrow V$ of a finite-dimensional vector space $V$ is bijective if it is injective or surjective.

(iii) Let $V$ be the $\mathbb{R}$-vector space $\mathbb{R}[X]$ of all polynomials in $X$ with coefficients in $\mathbb{R}$. Give an example of a linear transformation $f: V \rightarrow V$ which is surjective but not bijective.

Paper 1, Section II, G

Let $V, W$ be finite-dimensional vector spaces over a field $F$ and $f: V \rightarrow W$ a linear map.

(i) Show that $f$ is injective if and only if the image of every linearly independent subset of $V$ is linearly independent in $W$.

(ii) Define the dual space $V^{*}$ of $V$ and the dual map $f^{*}: W^{*} \rightarrow V^{*}$.

(iii) Show that $f$ is surjective if and only if the image under $f^{*}$ of every linearly independent subset of $W^{*}$ is linearly independent in $V^{*}$.

Paper 1, Section II, H

Let $P=\left(p_{i j}\right)_{i, j \in S}$ be the transition matrix for an irreducible Markov chain on the finite state space $S$.

(i) What does it mean to say $\pi$ is the invariant distribution for the chain?

(ii) What does it mean to say the chain is in detailed balance with respect to $\pi$ ?

(iii) A symmetric random walk on a connected finite graph is the Markov chain whose state space is the set of vertices of the graph and whose transition probabilities are

$p_{i j}= \begin{cases}1 / D_{i} & \text { if } j \text { is adjacent to } i \\ 0 & \text { otherwise }\end{cases}$

where $D_{i}$ is the number of vertices adjacent to vertex $i$. Show that the random walk is in detailed balance with respect to its invariant distribution.
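A small concrete check (the graph below is an arbitrary example, not from the source): for a symmetric random walk, $\pi_{i}$ proportional to the degree $D_{i}$ satisfies detailed balance $\pi_{i} p_{i j}=\pi_{j} p_{j i}$.

```python
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}   # a connected graph
deg = {i: len(adj[i]) for i in adj}
total = sum(deg.values())
pi = {i: deg[i] / total for i in adj}                # invariant distribution

def p(i, j):
    # transition probabilities of the symmetric random walk
    return 1 / deg[i] if j in adj[i] else 0.0

for i in adj:
    for j in adj:
        assert abs(pi[i] * p(i, j) - pi[j] * p(j, i)) < 1e-12
print("detailed balance holds; pi =", pi)
```

Note that $\pi_{i} p_{i j}=1/\sum_{k} D_{k}$ for every edge, which is manifestly symmetric in $i$ and $j$.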

(iv) Let $\pi$ be the invariant distribution for the transition matrix $P$, and define an inner product for vectors $x, y \in \mathbb{R}^{S}$ by the formula

$\langle x, y\rangle=\sum_{i \in S} x_{i} \pi_{i} y_{i}$

Show that the equation

$\langle x, P y\rangle=\langle P x, y\rangle$

holds for all vectors $x, y \in \mathbb{R}^{S}$ if and only if the chain is in detailed balance with respect to $\pi$. [Here $z \in \mathbb{R}^{S}$ means $z=\left(z_{i}\right)_{i \in S}$.]

Paper 1, Section II, A

Let $f(t)$ be a real function defined on an interval $(-T, T)$ with Fourier series

$f(t)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}\left(a_{n} \cos \frac{n \pi t}{T}+b_{n} \sin \frac{n \pi t}{T}\right)$

State and prove Parseval's theorem for $f(t)$ and its Fourier series. Write down the formulae for $a_{0}, a_{n}$ and $b_{n}$ in terms of $f(t), \cos \frac{n \pi t}{T}$ and $\sin \frac{n \pi t}{T}$.

Find the Fourier series of the square wave function defined on $(-\pi, \pi)$ by

$g(t)=\left\{\begin{array}{lr} 0 & -\pi<t \leqslant 0 \\ 1 & 0<t<\pi \end{array}\right.$

Hence evaluate

$\sum_{k=0}^{\infty} \frac{(-1)^{k}}{(2 k+1)}$

Using some of the above results evaluate

$\sum_{k=0}^{\infty} \frac{1}{(2 k+1)^{2}}$

What is the sum of the Fourier series for $g(t)$ at $t=0$ ? Comment on your answer.
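As a numerical sketch for the two series in the question (the truncation point is an arbitrary choice): the alternating series should approach $\pi/4$ and the squared series $\pi^{2}/8$.

```python
from math import pi

N = 100000
leibniz = sum((-1) ** k / (2 * k + 1) for k in range(N))
squares = sum(1 / (2 * k + 1) ** 2 for k in range(N))

print(leibniz, pi / 4)     # alternating series, error ~ 1/(2N)
print(squares, pi ** 2 / 8)  # tail of the squared series ~ 1/(4N)
```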

Paper 1, Section II, G

Let $X$ be a metric space with the distance function $d: X \times X \rightarrow \mathbb{R}$. For a subset $Y$ of $X$, its diameter is defined as $\delta(Y):=\sup \left\{d\left(y, y^{\prime}\right) \mid y, y^{\prime} \in Y\right\}$.

Show that, if $X$ is compact and $\left\{U_{\lambda}\right\}_{\lambda \in \Lambda}$ is an open covering of $X$, then there exists an $\epsilon>0$ such that every subset $Y \subset X$ with $\delta(Y)<\epsilon$ is contained in some $U_{\lambda}$.

Paper 1, Section I, B

Orthogonal monic polynomials $p_{0}, p_{1}, \ldots, p_{n}, \ldots$ are defined with respect to the inner product $\langle p, q\rangle=\int_{-1}^{1} w(x) p(x) q(x) d x$, where $p_{n}$ is of degree $n$. Show that such polynomials obey a three-term recurrence relation

$p_{n+1}(x)=\left(x-\alpha_{n}\right) p_{n}(x)-\beta_{n} p_{n-1}(x)$

for appropriate choices of $\alpha_{n}$ and $\beta_{n}$.

Now suppose that $w(x)$ is an even function of $x$. Show that the $p_{n}$ are even or odd functions of $x$ according to whether $n$ is even or odd.
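A Gram-Schmidt sketch for one concrete even weight, $w(x)=1$ on $[-1,1]$ (this choice is ours, not part of the question): build the monic orthogonal polynomials via the three-term recurrence and observe that every $\alpha_{n}$ vanishes, so $p_{n}$ has the same parity as $n$.

```python
from fractions import Fraction

def inner(p, q):
    # <p, q> = integral over [-1, 1] of p*q, computed exactly on
    # coefficient lists [c0, c1, ...] meaning c0 + c1*x + ...
    s = Fraction(0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if (i + j) % 2 == 0:           # odd powers integrate to zero
                s += a * b * Fraction(2, i + j + 1)
    return s

def shift(p):                               # multiply a polynomial by x
    return [Fraction(0)] + list(p)

def axpy(a, p, q):                          # a*p + q on coefficient lists
    m = max(len(p), len(q))
    p = list(p) + [Fraction(0)] * (m - len(p))
    q = list(q) + [Fraction(0)] * (m - len(q))
    return [a * u + v for u, v in zip(p, q)]

polys = [[Fraction(1)]]                     # p_0 = 1
p_prev = None
for n in range(3):
    p_cur = polys[-1]
    xp = shift(p_cur)
    alpha = inner(xp, p_cur) / inner(p_cur, p_cur)
    nxt = axpy(-alpha, p_cur, xp)           # x p_n - alpha_n p_n
    if p_prev is not None:
        beta = inner(p_cur, p_cur) / inner(p_prev, p_prev)
        nxt = axpy(-beta, p_prev, nxt)      # ... - beta_n p_{n-1}
    p_prev = p_cur
    polys.append(nxt)

print(polys)   # p_2 = x^2 - 1/3, p_3 = x^3 - (3/5)x, alternating parity
```

With this weight the recurrence reproduces the monic Legendre polynomials, and every coefficient of the "wrong" parity is exactly zero.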

Paper 1, Section II, B

Consider a function $f(x)$ defined on the domain $x \in[0,1]$. Find constants $\alpha, \beta, \gamma$ so that for any fixed $\xi \in[0,1]$,

$f^{\prime \prime}(\xi)=\alpha f(0)+\beta f^{\prime}(0)+\gamma f(1)$

is exactly satisfied for polynomials of degree less than or equal to two.
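A sketch of the determination (working the conditions out ourselves, so treat the values as a check rather than the official answer): imposing exactness on the monomials $1, x, x^{2}$ gives $0=\alpha+\gamma$, $0=\beta+\gamma$, $2=\gamma$, so $(\alpha, \beta, \gamma)=(-2,-2,2)$, independent of $\xi$.

```python
from fractions import Fraction as F

alpha, beta, gamma = F(-2), F(-2), F(2)

def rule(f0, fp0, f1):
    # right-hand side alpha*f(0) + beta*f'(0) + gamma*f(1)
    return alpha * f0 + beta * fp0 + gamma * f1

# check on a general quadratic f = a + b*x + c*x**2, where f'' = 2c everywhere
for a in range(-2, 3):
    for b in range(-2, 3):
        for c in range(-2, 3):
            f0, fp0, f1 = a, b, a + b + c
            assert rule(f0, fp0, f1) == 2 * c
print("exact for all quadratics:", (alpha, beta, gamma))
```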

By using the Peano kernel theorem, or otherwise, show that

$\begin{aligned} f^{\prime}(\xi)-f^{\prime}(0)-\xi(\alpha f(0)&\left.+\beta f^{\prime}(0)+\gamma f(1)\right)=\int_{0}^{\xi}(\xi-\theta) H_{1}(\theta) f^{\prime \prime \prime}(\theta) d \theta \\ &+\int_{0}^{\xi} \theta H_{2}(\theta) f^{\prime \prime \prime}(\theta) d \theta+\int_{\xi}^{1} \xi H_{2}(\theta) f^{\prime \prime \prime}(\theta) d \theta \end{aligned}$

where $H_{1}(\theta)=1-(1-\theta)^{2} \geqslant 0, H_{2}(\theta)=-(1-\theta)^{2} \leqslant 0$. Thus show that

$\left|f^{\prime}(\xi)-f^{\prime}(0)-\xi\left(\alpha f(0)+\beta f^{\prime}(0)+\gamma f(1)\right)\right| \leqslant \frac{1}{6}\left(2 \xi-3 \xi^{2}+4 \xi^{3}-\xi^{4}\right)\left\|f^{\prime \prime \prime}\right\|_{\infty} .$

Paper 1, Section I, H

Suppose that $A x \leqslant b$ and $x \geqslant 0$ and $A^{T} y \geqslant c$ and $y \geqslant 0$ where $x$ and $c$ are $n$-dimensional column vectors, $y$ and $b$ are $m$-dimensional column vectors, and $A$ is an $m \times n$ matrix. Here, the vector inequalities are interpreted component-wise.

(i) Show that $c^{T} x \leqslant b^{T} y$.

(ii) Find the maximum value of

$\begin{aligned} 6 x_{1}+8 x_{2}+3 x_{3} \quad \text { subject to } & 2 x_{1}+4 x_{2}+x_{3} \leqslant 10 \\ & 3 x_{1}+4 x_{2}+3 x_{3} \leqslant 6 \\ & x_{1}, x_{2}, x_{3} \geqslant 0 \end{aligned}$

You should state any results from the course used in your solution.
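A certificate sketch for part (ii): by part (i), any primal-feasible $x$ and dual-feasible $y$ satisfy $c^{T} x \leqslant b^{T} y$, so exhibiting a feasible pair with equal values proves optimality. The particular points below were found by hand and are our choice, not taken from the source.

```python
c = [6, 8, 3]
b = [10, 6]
A = [[2, 4, 1],
     [3, 4, 3]]

x = [2, 0, 0]        # primal candidate
y = [0, 2]           # dual candidate

# primal feasibility: A x <= b, x >= 0
assert all(sum(A[i][j] * x[j] for j in range(3)) <= b[i] for i in range(2))
assert all(xj >= 0 for xj in x)
# dual feasibility: A^T y >= c, y >= 0
assert all(sum(A[i][j] * y[i] for i in range(2)) >= c[j] for j in range(3))
assert all(yi >= 0 for yi in y)

cx = sum(ci * xi for ci, xi in zip(c, x))
by = sum(bi * yi for bi, yi in zip(b, y))
print(cx, by)        # equal values => both optimal
```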

Paper 1, Section II, C

For a quantum mechanical particle moving freely on a circle of length $2 \pi$, the wavefunction $\psi(t, x)$ satisfies the Schrödinger equation

$i \hbar \frac{\partial \psi}{\partial t}=-\frac{\hbar^{2}}{2 m} \frac{\partial^{2} \psi}{\partial x^{2}}$

on the interval $0 \leqslant x \leqslant 2 \pi$, and also the periodicity conditions $\psi(t, 2 \pi)=\psi(t, 0)$, and $\frac{\partial \psi}{\partial x}(t, 2 \pi)=\frac{\partial \psi}{\partial x}(t, 0)$. Find the allowed energy levels of the particle, and their degeneracies.

The current is defined as

$j=\frac{i \hbar}{2 m}\left(\psi \frac{\partial \psi^{*}}{\partial x}-\psi^{*} \frac{\partial \psi}{\partial x}\right)$

where $\psi$ is a normalized state. Write down the general normalized state of the particle when it has energy $2 \hbar^{2} / m$, and show that in any such state the current $j$ is independent of $x$ and $t$. Find a state with this energy for which the current has its maximum positive value, and find a state with this energy for which the current vanishes.
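A numerical sketch in units $\hbar=m=1$ (our choice): a general normalized state with energy $2\hbar^{2}/m$ is $\psi=(a e^{2 i x}+b e^{-2 i x})/\sqrt{2\pi}$ with $|a|^{2}+|b|^{2}=1$, and its current should be $x$-independent and proportional to $|a|^{2}-|b|^{2}$.

```python
from cmath import exp as cexp
from math import pi, sqrt

def current(a, b, x):
    # j = (i/2) * (psi * d(psi*)/dx - psi* * d(psi)/dx) in these units
    psi = (a * cexp(2j * x) + b * cexp(-2j * x)) / sqrt(2 * pi)
    dpsi = (2j * a * cexp(2j * x) - 2j * b * cexp(-2j * x)) / sqrt(2 * pi)
    return (0.5j * (psi * dpsi.conjugate() - psi.conjugate() * dpsi)).real

a, b = 1 / sqrt(2), 1j / sqrt(2)       # equal weights: current vanishes
print([round(current(a, b, x), 12) for x in (0.0, 0.7, 2.1)])
print(current(1, 0, 0.3))              # pure e^{2ix} mode: maximal current
```

The current for the pure $e^{2ix}$ mode is $1/\pi$ in these units, and it is the same at every $x$.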

Paper 1, Section I, H

Consider the experiment of tossing a coin $n$ times. Assume that the tosses are independent and the coin is biased, with unknown probability $p$ of heads and $1-p$ of tails. A total of $X$ heads is observed.

(i) What is the maximum likelihood estimator $\widehat{p}$ of $p$ ?

Now suppose that a Bayesian statistician has the $\operatorname{Beta}(M, N)$ prior distribution for $p$.

(ii) What is the posterior distribution for $p$ ?

(iii) Assuming the loss function is $L(p, a)=(p-a)^{2}$, show that the statistician's point estimate for $p$ is given by

$\frac{M+X}{M+N+n}$

[The $\operatorname{Beta}(M, N)$ distribution has density $\frac{\Gamma(M+N)}{\Gamma(M) \Gamma(N)} x^{M-1}(1-x)^{N-1}$ for $0<x<1$ and mean $\frac{M}{M+N}$.]
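A quadrature sketch (the numbers $M, N, n, X$ below are arbitrary choices): the posterior is $\operatorname{Beta}(M+X, N+n-X)$, so its mean under quadratic loss, computed here from the unnormalised density, should match $(M+X)/(M+N+n)$.

```python
M, N, n, X = 3, 4, 10, 6
a, b = M + X, N + n - X               # posterior Beta(a, b) parameters

K = 200000
num = den = 0.0
for k in range(K):
    p = (k + 0.5) / K                 # midpoint rule on (0, 1)
    w = p ** (a - 1) * (1 - p) ** (b - 1)   # unnormalised posterior density
    num += p * w
    den += w

print(num / den, (M + X) / (M + N + n))
```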

Paper 1, Section II, H

Let $X_{1}, \ldots, X_{n}$ be independent random variables with probability mass function $f(x ; \theta)$, where $\theta$ is an unknown parameter.

(i) What does it mean to say that $T$ is a sufficient statistic for $\theta$ ? State, but do not prove, the factorisation criterion for sufficiency.

(ii) State and prove the Rao-Blackwell theorem.

Now consider the case where $f(x ; \theta)=\frac{1}{x !}(-\log \theta)^{x} \theta$ for non-negative integer $x$ and $0<\theta<1$.

(iii) Find a one-dimensional sufficient statistic $T$ for $\theta$.

(iv) Show that $\tilde{\theta}=\mathbb{1}_{\left\{X_{1}=0\right\}}$ is an unbiased estimator of $\theta$.

(v) Find another unbiased estimator $\widehat{\theta}$ which is a function of the sufficient statistic $T$ and that has smaller variance than $\tilde{\theta}$. You may use the following fact without proof: $X_{1}+\cdots+X_{n}$ has the Poisson distribution with parameter $-n \log \theta$.
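A sketch of the Rao-Blackwellised estimator (our working, so treat it as a check rather than the official solution): conditioning on $T=X_{1}+\cdots+X_{n}$, which is Poisson with parameter $\lambda=-n\log\theta$, gives $\widehat{\theta}=\left(\frac{n-1}{n}\right)^{T}$, since $X_{1} \mid T=t$ is Binomial$(t, 1/n)$. Its expectation $\mathbb{E}[c^{T}]=e^{\lambda(c-1)}$ with $c=(n-1)/n$ collapses to $\theta$, which the code verifies by summing the Poisson series directly.

```python
from math import exp, log

def expectation_c_pow_T(theta, n, terms=200):
    """E[((n-1)/n)**T] for T ~ Poisson(-n*log(theta)), by direct summation."""
    lam = -n * log(theta)
    c = (n - 1) / n
    pmf = exp(-lam)                  # Poisson pmf at t = 0
    total = 0.0
    for t in range(terms):
        total += pmf * c ** t
        pmf *= lam / (t + 1)         # advance the Poisson pmf to t + 1
    return total

for theta in (0.2, 0.5, 0.9):
    print(theta, expectation_c_pow_T(theta, n=5))
```

Each printed pair agrees, consistent with $\widehat{\theta}$ being unbiased.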

Paper 1, Section I, D

(i) Write down the Euler-Lagrange equations for the volume integral

$\int_{V}(\nabla u \cdot \nabla u+12 u) d V$

where $V$ is the unit ball $x^{2}+y^{2}+z^{2} \leqslant 1$, and verify that the function $u(x, y, z)=x^{2}+y^{2}+z^{2}$ gives a stationary value of the integral subject to the condition $u=1$ on the boundary.

(ii) Write down the Euler-Lagrange equations for the integral

$\int_{0}^{1}\left(\dot{x}^{2}+\dot{y}^{2}+4 x+4 y\right) d t$

where the dot denotes differentiation with respect to $t$, and verify that the functions $x(t)=t^{2}, y(t)=t^{2}$ give a stationary value of the integral subject to the boundary conditions $x(0)=y(0)=0$ and $x(1)=y(1)=1$.
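A verification sketch for both parts (the step $h$ and sample points are arbitrary choices): for (i) the Euler-Lagrange equation is $2\nabla^{2} u=12$, and $u=x^{2}+y^{2}+z^{2}$ has $\nabla^{2} u=6$ with $u=1$ on the unit sphere; for (ii) the equations are $2\ddot{x}=4$, $2\ddot{y}=4$, satisfied by $x=y=t^{2}$.

```python
h = 1e-4

def u(x, y, z):
    return x * x + y * y + z * z

def laplacian_u(x, y, z):
    # central second differences in each coordinate
    return ((u(x + h, y, z) - 2 * u(x, y, z) + u(x - h, y, z))
            + (u(x, y + h, z) - 2 * u(x, y, z) + u(x, y - h, z))
            + (u(x, y, z + h) - 2 * u(x, y, z) + u(x, y, z - h))) / h ** 2

def xpath(t):
    return t * t

print(laplacian_u(0.1, 0.2, 0.3))                                   # ~ 6
print((xpath(0.5 + h) - 2 * xpath(0.5) + xpath(0.5 - h)) / h ** 2)  # ~ 2
print(u(0.6, 0.8, 0.0))                                             # ~ 1 on the sphere
```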