# Part IB, 2014, Paper 3

Paper 3, Section I, F

Let $U \subset \mathbb{R}^{n}$ be an open set and let $f: U \rightarrow \mathbb{R}$ be a differentiable function on $U$ such that $\left\|\left.D f\right|_{x}\right\| \leqslant M$ for some constant $M$ and all $x \in U$, where $\left\|\left.D f\right|_{x}\right\|$ denotes the operator norm of the linear map $\left.D f\right|_{x}$. Let $[a, b]=\{t a+(1-t) b: 0 \leqslant t \leqslant 1\}$ $(a, b \in \mathbb{R}^{n})$ be a straight-line segment contained in $U$. Prove that $|f(b)-f(a)| \leqslant M\|b-a\|$, where $\|\cdot\|$ denotes the Euclidean norm on $\mathbb{R}^{n}$.

Prove that if $U$ is an open ball and $\left.D f\right|_{x}=0$ for each $x \in U$, then $f$ is constant on $U$.

Paper 3, Section II, F

Let $f_{n}, n=1,2, \ldots$, be continuous functions on an open interval $(a, b)$. Prove that if the sequence $\left(f_{n}\right)$ converges to $f$ uniformly on $(a, b)$ then the function $f$ is continuous on $(a, b)$.

If instead $\left(f_{n}\right)$ is only known to converge pointwise to $f$ and $f$ is continuous, must $\left(f_{n}\right)$ be uniformly convergent? Justify your answer.

Suppose that a function $f$ has a continuous derivative on $(a, b)$ and let

$g_{n}(x)=n\left(f\left(x+\frac{1}{n}\right)-f(x)\right)$

Stating clearly any standard results that you require, show that the functions $g_{n}$ converge uniformly to $f^{\prime}$ on each interval $[\alpha, \beta] \subset(a, b)$.
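As a quick numerical illustration (not part of the question), one can measure the sup-norm error of $g_n$ against $f'$ for a concrete choice of $f$; the choice $f = \sin$ on $[0, 1]$ below is arbitrary.

```python
import numpy as np

# Illustration of the uniform convergence g_n -> f' on a closed
# subinterval, using the arbitrary test function f = sin (so f' = cos).
f, fprime = np.sin, np.cos
x = np.linspace(0.0, 1.0, 1001)

def sup_error(n):
    # sup-norm distance between g_n(x) = n(f(x + 1/n) - f(x)) and f'(x)
    g_n = n * (f(x + 1.0 / n) - f(x))
    return np.max(np.abs(g_n - fprime(x)))

errors = [sup_error(n) for n in (10, 100, 1000)]
print(errors)  # decreasing, roughly like 1/(2n)
```

The observed errors shrink like $1/n$, consistent with the Taylor-expansion argument the question invites.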

Paper 3, Section II, G

State the Residue Theorem precisely.

Let $D$ be a star-domain, and let $\gamma$ be a closed path in $D$. Suppose that $f$ is a holomorphic function on $D$, having no zeros on $\gamma$. Let $N$ be the number of zeros of $f$ inside $\gamma$, counted with multiplicity (i.e. order of zero and winding number). Show that

$N=\frac{1}{2 \pi i} \int_{\gamma} \frac{f^{\prime}(z)}{f(z)} d z$

[The Residue Theorem may be used without proof.]
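As a numerical sanity check of the formula just stated (not part of the question), one can evaluate the contour integral for an arbitrary test function; $f(z) = z^{2}(z-3)$ below has a double zero at $0$ inside the unit circle and a simple zero at $3$ outside it, so the integral should return $N = 2$.

```python
import numpy as np

# Riemann-sum evaluation of (1/2*pi*i) * integral of f'/f over the unit
# circle, for the arbitrary test case f(z) = z^2 (z - 3).
m = 20000
t = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
z = np.exp(1j * t)                   # unit circle, positively oriented
f = z**2 * (z - 3.0)
fp = 2.0 * z * (z - 3.0) + z**2      # f'(z)
# dz = i z dt along the parametrisation z = e^{it}
N = np.sum(fp / f * 1j * z) * (2.0 * np.pi / m) / (2.0j * np.pi)
print(round(N.real))  # 2
```

The simple zero at $3$ lies outside the contour and correctly does not contribute.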

Now suppose that $g$ is another holomorphic function on $D$, also having no zeros on $\gamma$ and with $|g(z)|<|f(z)|$ on $\gamma$. Explain why, for any $0 \leqslant t \leqslant 1$, the expression

$I(t)=\int_{\gamma} \frac{f^{\prime}(z)+t g^{\prime}(z)}{f(z)+t g(z)} d z$

is well-defined. By considering the behaviour of the function $I(t)$ as $t$ varies, deduce Rouché's Theorem.

For each $n$, let $p_{n}$ be the polynomial $\sum_{k=0}^{n} \frac{z^{k}}{k !}$. Show that, as $n$ tends to infinity, the smallest modulus of the roots of $p_{n}$ also tends to infinity.

[You may assume any results on convergence of power series, provided that they are stated clearly.]
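The final part can be illustrated numerically (this is an editorial aside, not part of the question): computing the roots of $p_n$ for a few values of $n$ shows the smallest root modulus growing with $n$.

```python
import math
import numpy as np

# Smallest root modulus of p_n(z) = sum_{k=0}^{n} z^k / k! for a few n.
def min_root_modulus(n):
    # numpy.roots expects coefficients ordered from the highest degree down
    coeffs = [1.0 / math.factorial(k) for k in range(n, -1, -1)]
    return np.min(np.abs(np.roots(coeffs)))

moduli = [min_root_modulus(n) for n in (5, 10, 20)]
print(moduli)  # increasing with n
```

This is consistent with the claim: since $p_n \to e^z$ locally uniformly and $e^z$ has no zeros, any fixed disc is eventually zero-free.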

Paper 3, Section I, B

Find the most general cubic form

$u(x, y)=a x^{3}+b x^{2} y+c x y^{2}+d y^{3}$

which satisfies Laplace's equation, where $a, b, c$ and $d$ are all real. Hence find an analytic function $f(z)=f(x+i y)$ which has such a $u$ as its real part.
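A hedged numerical check of the expected answer (an editorial aside): imposing $u_{xx} + u_{yy} = 0$ on the general cubic forces $c = -3a$ and $b = -3d$, and the resulting $u$ is the real part of $f(z) = (a + i d)z^{3}$. The values $a = 2$, $d = 0.7$ below are arbitrary.

```python
# Check that u = a x^3 - 3d x^2 y - 3a x y^2 + d y^3 is harmonic and equals
# Re((a + i d) z^3), for arbitrary sample values of a and d.
a, d = 2.0, 0.7
b, c = -3.0 * d, -3.0 * a

def u(x, y):
    return a * x**3 + b * x**2 * y + c * x * y**2 + d * y**3

# central-difference Laplacian at an arbitrary point (exact for cubics,
# up to rounding)
x0, y0, h = 0.3, -1.2, 1e-3
laplacian = (u(x0 + h, y0) - 2 * u(x0, y0) + u(x0 - h, y0)) / h**2 \
          + (u(x0, y0 + h) - 2 * u(x0, y0) + u(x0, y0 - h)) / h**2
print(abs(laplacian) < 1e-6)  # True: u is harmonic

zc = complex(x0, y0)
print(abs((complex(a, d) * zc**3).real - u(x0, y0)) < 1e-9)  # True
```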

Paper 3, Section II, A

(i) Consider charges $-q$ at $\pm \mathbf{d}$ and $2 q$ at $(0,0,0)$. Write down the electric potential.

(ii) Take $\mathbf{d}=(0,0, d)$. A quadrupole is defined in the limit that $q \rightarrow \infty, d \rightarrow 0$ such that $q d^{2}$ tends to a constant $p$. Find the quadrupole's potential, showing that it is of the form

$\phi(\mathbf{r})=A \frac{\left(r^{2}+C z^{D}\right)}{r^{B}}$

where $r=|\mathbf{r}|$. Determine the constants $A, B, C$ and $D$.

(iii) The quadrupole is fixed at the origin. At time $t=0$ a particle of charge $-Q$ (where $Q$ has the same sign as $q$) and mass $m$ is at $(1,0,0)$ travelling with velocity $d \mathbf{r} / d t=(-\kappa, 0,0)$, where

$\kappa=\sqrt{\frac{Q p}{2 \pi \epsilon_{0} m}} .$

Neglecting gravity, find the time taken for the particle to reach the quadrupole in terms of $\kappa$, given that the force on the particle is equal to $m d^{2} \mathbf{r} / d t^{2}$.

Paper 3, Section II, B

A bubble of gas occupies the spherical region $r \leqslant R(t)$, and an incompressible irrotational liquid of constant density $\rho$ occupies the outer region $r \geqslant R$, such that as $r \rightarrow \infty$ the liquid is at rest with constant pressure $p_{\infty}$. Briefly explain why it is appropriate to use a velocity potential $\phi(r, t)$ to describe the liquid velocity $\mathbf{u}$.

By applying continuity of velocity across the gas-liquid interface, show that the liquid pressure (for $r \geqslant R$ ) satisfies

$\frac{p}{\rho}+\frac{1}{2}\left(\frac{R^{2} \dot{R}}{r^{2}}\right)^{2}-\frac{1}{r} \frac{d}{d t}\left(R^{2} \dot{R}\right)=\frac{p_{\infty}}{\rho}, \quad \text { where } \dot{R}=\frac{d R}{d t} .$

Show that the excess pressure $p_{s}-p_{\infty}$ at the bubble surface $r=R$ is

$p_{s}-p_{\infty}=\frac{\rho}{2}\left(3 \dot{R}^{2}+2 R \ddot{R}\right), \quad \text { where } \ddot{R}=\frac{d^{2} R}{d t^{2}}$

and hence that

$p_{s}-p_{\infty}=\frac{\rho}{2 R^{2}} \frac{d}{d R}\left(R^{3} \dot{R}^{2}\right)$

The pressure $p_{g}(t)$ inside the gas bubble satisfies the equation of state

$p_{g} V^{4 / 3}=C$

where $C$ is a constant, and $V(t)$ is the bubble volume. At time $t=0$ the bubble is at rest with radius $R=a$. If the bubble then expands and comes to rest at $R=2 a$, determine the required gas pressure $p_{0}$ at $t=0$ in terms of $p_{\infty}$.

[You may assume that there is contact between liquid and gas for all time, that all motion is spherically symmetric about the origin $r=0$, and that there is no body force. You may also assume Bernoulli's integral of the equation of motion to determine the liquid pressure

$\frac{p}{\rho}+\frac{\partial \phi}{\partial t}+\frac{1}{2}|\nabla \phi|^{2}=A(t)$

where $\phi(r, t)$ is the velocity potential.]

Paper 3, Section I, F

Let $f(x)=A x+b$ be an isometry $\mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$, where $A$ is an $n \times n$ matrix and $b \in \mathbb{R}^{n}$. What are the possible values of $\operatorname{det} A$ ?

Let $I$ denote the $n \times n$ identity matrix. Show that if $n=2$ and $\operatorname{det} A>0$, but $A \neq I$, then $f$ has a fixed point. Must $f$ have a fixed point if $n=3$ and $\operatorname{det} A>0$, but $A \neq I$? Justify your answer.
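For $n = 2$ the fixed-point claim can be illustrated concretely (an editorial aside): when $A$ is a rotation through a nonzero angle, $I - A$ is invertible, so $x = (I-A)^{-1}b$ is the unique fixed point. The angle and translation below are arbitrary.

```python
import numpy as np

# Fixed point of f(x) = Ax + b for an arbitrary planar rotation A != I.
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
b = np.array([1.0, -2.0])
x = np.linalg.solve(np.eye(2) - A, b)  # solve (I - A) x = b
print(np.allclose(A @ x + b, x))  # True: x is fixed by f
```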

Paper 3, Section II, F

Let $\mathcal{T}$ be a decomposition of the two-dimensional sphere into polygonal domains, with every polygon having at least three edges. Let $V, E$, and $F$ denote the numbers of vertices, edges and faces of $\mathcal{T}$, respectively. State Euler's formula. Prove that $2 E \geqslant 3 F$.

Suppose that at least three edges meet at every vertex of $\mathcal{T}$. Let $F_{n}$ be the number of faces of $\mathcal{T}$ that have exactly $n$ edges $(n \geqslant 3)$ and let $V_{m}$ be the number of vertices at which exactly $m$ edges meet $(m \geqslant 3)$. Is it possible for $\mathcal{T}$ to have $V_{3}=F_{3}=0$ ? Justify your answer.

By expressing $6 F-\sum_{n} n F_{n}$ in terms of the $V_{j}$, or otherwise, show that $\mathcal{T}$ has at least four faces that are triangles, quadrilaterals and/or pentagons.

Paper 3, Section I, E

State and prove Hilbert's Basis Theorem.

Paper 3, Section II, E

Let $R$ be a ring, $M$ an $R$-module and $S=\left\{m_{1}, \ldots, m_{k}\right\}$ a subset of $M$. Define what it means to say $S$ spans $M$. Define what it means to say $S$ is an independent set.

We say $S$ is a basis for $M$ if $S$ spans $M$ and $S$ is an independent set. Prove that the following two statements are equivalent.

(i) $S$ is a basis for $M$.

(ii) Every element of $M$ is uniquely expressible in the form $r_{1} m_{1}+\cdots+r_{k} m_{k}$ for some $r_{1}, \ldots, r_{k} \in R$.

We say $S$ generates $M$ freely if $S$ spans $M$ and any map $\Phi: S \rightarrow N$, where $N$ is an $R$-module, can be extended to an $R$-module homomorphism $\Theta: M \rightarrow N$. Prove that $S$ generates $M$ freely if and only if $S$ is a basis for $M$.

Let $M$ be an $R$-module. Are the following statements true or false? Give reasons.

(i) If $S$ spans $M$ then $S$ necessarily contains an independent spanning set for $M$.

(ii) If $S$ is an independent subset of $M$ then $S$ can always be extended to a basis for $M$.

Paper 3, Section II, G

Let $q$ be a nonsingular quadratic form on a finite-dimensional real vector space $V$. Prove that we may write $V=P \oplus N$, where the restriction of $q$ to $P$ is positive definite, the restriction of $q$ to $N$ is negative definite, and $q(x+y)=q(x)+q(y)$ for all $x \in P$ and $y \in N$. [No result on diagonalisability may be assumed.]

Show that the dimensions of $P$ and $N$ are independent of the choice of $P$ and $N$. Give an example to show that $P$ and $N$ are not themselves uniquely defined.

Find such a decomposition $V=P \oplus N$ when $V=\mathbb{R}^{3}$ and $q$ is the quadratic form $q((x, y, z))=x^{2}+2 y^{2}-2 x y-2 x z$.
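A hedged numerical check for the final part (an editorial aside): the symmetric matrix of $q$ has eigenvalues of both signs, and counting them gives the dimensions of $P$ and $N$.

```python
import numpy as np

# Symmetric matrix of q((x,y,z)) = x^2 + 2y^2 - 2xy - 2xz; the counts of
# positive and negative eigenvalues give dim P and dim N.
M = np.array([[ 1.0, -1.0, -1.0],
              [-1.0,  2.0,  0.0],
              [-1.0,  0.0,  0.0]])
eigs = np.linalg.eigvalsh(M)
print(int(np.sum(eigs > 0)), int(np.sum(eigs < 0)))  # 2 1: dim P = 2, dim N = 1
```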

Paper 3, Section I, H

Let $\left(X_{n}: n \geqslant 0\right)$ be a homogeneous Markov chain with state space $S$. For $i, j$ in $S$ let $p_{i, j}(n)$ denote the $n$-step transition probability $\mathbb{P}\left(X_{n}=j \mid X_{0}=i\right)$.

(i) Express the $(m+n)$-step transition probability $p_{i, j}(m+n)$ in terms of the $n$-step and $m$-step transition probabilities.

(ii) Write $i \rightarrow j$ if there exists $n \geqslant 0$ such that $p_{i, j}(n)>0$, and $i \leftrightarrow j$ if $i \rightarrow j$ and $j \rightarrow i$. Prove that if $i \leftrightarrow j$ and $i \neq j$ then either both $i$ and $j$ are recurrent or both $i$ and $j$ are transient. [You may assume that a state $i$ is recurrent if and only if $\sum_{n=0}^{\infty} p_{i, i}(n)=\infty$, and otherwise $i$ is transient.]

(iii) A Markov chain has state space $\{0,1,2,3\}$ and transition matrix

$\left(\begin{array}{cccc} \frac{1}{2} & \frac{1}{3} & 0 & \frac{1}{6} \\ 0 & \frac{3}{4} & 0 & \frac{1}{4} \\ \frac{1}{2} & \frac{1}{2} & 0 & 0 \\ \frac{1}{2} & 0 & 0 & \frac{1}{2} \end{array}\right)$

For each state $i$, determine whether $i$ is recurrent or transient. [Results from the course may be quoted without proof, provided they are clearly stated.]
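A numerical illustration for part (iii) (an editorial aside): raising the transition matrix to a high power shows that state 2 receives no transitions from any state, so it is transient once left, while $\{0, 1, 3\}$ form a closed communicating class; a finite closed class is recurrent.

```python
import numpy as np

# Transition matrix from the question; P^50 shows the long-run structure.
P = np.array([[1/2, 1/3, 0, 1/6],
              [0,   3/4, 0, 1/4],
              [1/2, 1/2, 0, 0  ],
              [1/2, 0,   0, 1/2]])
Pn = np.linalg.matrix_power(P, 50)
print(np.allclose(Pn[:, 2], 0))      # True: state 2 is never re-entered
print(bool(np.all(Pn[:, [0, 1, 3]] > 0)))  # True: 0, 1, 3 communicate
```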

Paper 3, Section I, D

Using the method of characteristics, solve the differential equation

$-y \frac{\partial u}{\partial x}+x \frac{\partial u}{\partial y}=0$

where $x, y \in \mathbb{R}$ and $u=\cos y^{2}$ on $x=0, y \geqslant 0$.
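A hedged numerical check (an editorial aside): the characteristics of this equation are circles $x^{2}+y^{2} = \text{const}$, which suggests the candidate solution $u = \cos(x^{2}+y^{2})$; the code below verifies it satisfies both the PDE (by finite differences at an arbitrary point) and the data on $x = 0$.

```python
import math

# Candidate solution suggested by the circular characteristics.
def u(x, y):
    return math.cos(x**2 + y**2)

x0, y0, h = 0.8, -0.4, 1e-5
u_x = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
u_y = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
print(abs(-y0 * u_x + x0 * u_y) < 1e-6)  # True: the PDE holds
print(u(0.0, 1.3) == math.cos(1.3**2))   # True: data on x = 0 matched
```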

Paper 3, Section II, D

Let $\mathcal{L}$ be a linear second-order differential operator on the interval $[0, \pi / 2]$. Consider the problem

$\mathcal{L} y(x)=f(x) ; \quad y(0)=y(\pi / 2)=0$

with $f(x)$ bounded in $[0, \pi / 2]$.

(i) How is a Green's function for this problem defined?

(ii) How is a solution $y(x)$ for this problem constructed from the Green's function?

(iii) Describe the continuity and jump conditions used in the construction of the Green's function.

(iv) Use the continuity and jump conditions to construct the Green's function for the differential equation

$\frac{d^{2} y}{d x^{2}}-\frac{d y}{d x}+\frac{5}{4} y=f(x)$

on the interval $[0, \pi / 2]$ with the boundary conditions $y(0)=0, y(\pi / 2)=0$ and an arbitrary bounded function $f(x)$. Use the Green's function to construct a solution $y(x)$ for the particular case $f(x)=e^{x / 2}$.
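A hedged check of the ingredients for part (iv) (an editorial aside): the homogeneous equation $y'' - y' + \frac{5}{4}y = 0$ has characteristic roots $\frac{1}{2} \pm i$, so $y_1 = e^{x/2}\sin x$ satisfies the left boundary condition and $y_2 = e^{x/2}\cos x$ the right one; these are the building blocks of the Green's function.

```python
import math

# Finite-difference residual of y'' - y' + (5/4) y for a candidate y.
def residual(y, x, h=1e-4):
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    yp = (y(x + h) - y(x - h)) / (2 * h)
    return ypp - yp + 1.25 * y(x)

y1 = lambda x: math.exp(x / 2) * math.sin(x)  # y1(0) = 0
y2 = lambda x: math.exp(x / 2) * math.cos(x)  # y2(pi/2) = 0
print(abs(residual(y1, 0.9)) < 1e-4)  # True
print(abs(residual(y2, 0.9)) < 1e-4)  # True
print(y1(0.0) == 0.0 and abs(y2(math.pi / 2)) < 1e-12)  # True
```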

Paper 3, Section I, E

Suppose $(X, d)$ is a metric space. Do the following necessarily define a metric on $X$ ? Give proofs or counterexamples.

(i) $d_{1}(x, y)=k d(x, y)$ for some constant $k>0$, for all $x, y \in X$.

(ii) $d_{2}(x, y)=\min \{1, d(x, y)\}$ for all $x, y \in X$.

(iii) $d_{3}(x, y)=(d(x, y))^{2}$ for all $x, y \in X$.

Paper 3, Section II, C

A Runge-Kutta scheme is given by

$k_{1}=h f\left(y_{n}\right), \quad k_{2}=h f\left(y_{n}+\left[(1-a) k_{1}+a k_{2}\right]\right), \quad y_{n+1}=y_{n}+\frac{1}{2}\left(k_{1}+k_{2}\right)$

for the solution of an autonomous differential equation $y^{\prime}=f(y)$, where $a$ is a real parameter. What is the order of the scheme? Identify all values of $a$ for which the scheme is A-stable. Determine the linear stability domain for this range.
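A hedged check for the single value $a = \frac{1}{2}$ (an editorial aside): applying the scheme to the linear test equation $y' = \lambda y$, the implicit stage $k_2$ can be solved for explicitly, and the resulting amplification factor is $R(z) = (1 + z/2)/(1 - z/2)$ with $z = h\lambda$, which satisfies $|R(z)| = 1$ on the imaginary axis, consistent with A-stability at this parameter value.

```python
# One step of the scheme for f(y) = lam*y with y_n = 1, z = h*lam.
def step(z, a):
    k1 = z
    # k2 solves k2 = z * (1 + (1 - a) * k1 + a * k2)
    k2 = z * (1 + (1 - a) * k1) / (1 - a * z)
    return 1 + 0.5 * (k1 + k2)

z = 0.3j  # an arbitrary point on the imaginary axis
R = step(z, 0.5)
print(abs(R - (1 + z / 2) / (1 - z / 2)) < 1e-14)  # True
print(abs(abs(R) - 1.0) < 1e-14)                   # True: |R(iy)| = 1
```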

Paper 3, Section II, H

Use the two-phase simplex method to maximise $2 x_{1}+x_{2}+x_{3}$ subject to the constraints

$x_{1}+x_{2} \geqslant 1, \quad x_{1}+x_{2}+2 x_{3} \leqslant 4, \quad x_{i} \geqslant 0 \text { for } i=1,2,3$

Derive the dual of this linear programming problem and find the optimal solution of the dual.
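A hedged numerical cross-check (an editorial aside, not the two-phase method itself): the same LP can be handed to an off-the-shelf solver by negating the objective.

```python
from scipy.optimize import linprog

# Maximise 2x1 + x2 + x3 by minimising its negation; the >= constraint is
# rewritten as -x1 - x2 <= -1 to fit linprog's A_ub x <= b_ub form.
res = linprog(c=[-2, -1, -1],
              A_ub=[[-1, -1, 0],    # x1 + x2 >= 1
                    [1, 1, 2]],     # x1 + x2 + 2x3 <= 4
              b_ub=[-1, 4],
              bounds=[(0, None)] * 3)
print(res.x, -res.fun)  # optimum at (4, 0, 0) with value 8
```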

Paper 3, Section I, A

The wavefunction of a normalised Gaussian wavepacket for a particle of mass $m$ in one dimension with potential $V(x)=0$ is given by

$\psi(x, t)=B \sqrt{A(t)} \exp \left(\frac{-x^{2} A(t)}{2}\right)$

where $A(0)=1$. Given that $\psi(x, t)$ is a solution of the time-dependent Schrödinger equation, find the complex-valued function $A(t)$ and the real constant $B$.

[You may assume that $\int_{-\infty}^{\infty} e^{-\lambda x^{2}} d x=\sqrt{\pi} / \sqrt{\lambda} .$ ]

Paper 3, Section II, A

The Hamiltonian of a two-dimensional isotropic harmonic oscillator is given by

$H=\frac{p_{x}^{2}+p_{y}^{2}}{2 m}+\frac{m \omega^{2}}{2}\left(x^{2}+y^{2}\right)$

where $x$ and $y$ denote position operators and $p_{x}$ and $p_{y}$ the corresponding momentum operators.

State without proof the commutation relations between the operators $x, y, p_{x}, p_{y}$. From these commutation relations, write $\left[x^{2}, p_{x}\right]$ and $\left[x, p_{x}^{2}\right]$ in terms of a single operator. Now consider the observable

$L=x p_{y}-y p_{x}$

Ehrenfest's theorem states that, for an observable $Q$ with expectation value $\langle Q\rangle$,

$\frac{d\langle Q\rangle}{d t}=\frac{1}{i \hbar}\langle[Q, H]\rangle+\left\langle\frac{\partial Q}{\partial t}\right\rangle$

Use it to show that the expectation value of $L$ is constant with time.

Given two states

$\psi_{1}=\alpha x \exp \left(-\beta\left(x^{2}+y^{2}\right)\right) \text { and } \psi_{2}=\alpha y \exp \left(-\beta\left(x^{2}+y^{2}\right)\right)$

where $\alpha$ and $\beta$ are constants, find a normalised linear combination of $\psi_{1}$ and $\psi_{2}$ that is an eigenstate of $L$, and the corresponding $L$ eigenvalue. [You may assume that $\alpha$ correctly normalises both $\psi_{1}$ and $\psi_{2}$.] If a quantum state is prepared in the linear combination you have found at time $t=0$, what is the expectation value of $L$ at a later time $t ?$
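A hedged numerical check (an editorial aside, with $\hbar = 1$ and an arbitrary $\beta$): one natural candidate combination is $\psi_1 + i\psi_2 \propto (x + iy)\,e^{-\beta(x^2+y^2)}$, and the code below verifies by finite differences that $L\psi = \hbar\psi$ for it, using $L = -i\hbar(x\,\partial_y - y\,\partial_x)$.

```python
import math

beta = 0.3  # arbitrary positive constant

def psi(x, y):
    # candidate eigenstate (x + i y) exp(-beta (x^2 + y^2)), unnormalised
    return (x + 1j * y) * math.exp(-beta * (x**2 + y**2))

x0, y0, h = 0.7, -0.5, 1e-5
dpsi_dx = (psi(x0 + h, y0) - psi(x0 - h, y0)) / (2 * h)
dpsi_dy = (psi(x0, y0 + h) - psi(x0, y0 - h)) / (2 * h)
Lpsi = -1j * (x0 * dpsi_dy - y0 * dpsi_dx)  # L psi with hbar = 1
print(abs(Lpsi - psi(x0, y0)) < 1e-8)  # True: eigenvalue +hbar
```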

Paper 3, Section II, H

Suppose that $X_{1}, \ldots, X_{n}$ are independent identically distributed random variables with

$\mathbb{P}\left(X_{i}=x\right)=\left(\begin{array}{c} k \\ x \end{array}\right) \theta^{x}(1-\theta)^{k-x}, \quad x=0, \ldots, k$

where $k$ is known and $\theta(0<\theta<1)$ is an unknown parameter. Find the maximum likelihood estimator $\hat{\theta}$ of $\theta$.

Statistician 1 has prior density for $\theta$ given by $\pi_{1}(\theta)=\alpha \theta^{\alpha-1}, 0<\theta<1$, where $\alpha>1$. Find the posterior distribution for $\theta$ after observing data $X_{1}=x_{1}, \ldots, X_{n}=x_{n}$. Write down the posterior mean $\hat{\theta}_{1}^{(B)}$, and show that

$\hat{\theta}_{1}^{(B)}=c \hat{\theta}+(1-c) \tilde{\theta}_{1}$

where $\tilde{\theta}_{1}$ depends only on the prior distribution and $c$ is a constant in $(0,1)$ that is to be specified.

Statistician 2 has prior density for $\theta$ given by $\pi_{2}(\theta)=\alpha(1-\theta)^{\alpha-1}, 0<\theta<1$. Briefly describe the prior beliefs that the two statisticians hold about $\theta$. Find the posterior mean $\hat{\theta}_{2}^{(B)}$ and show that $\hat{\theta}_{2}^{(B)}<\hat{\theta}_{1}^{(B)}$.

Suppose that $\alpha$ increases (but $n, k$ and the $x_{i}$ remain unchanged). How do the prior beliefs of the two statisticians change? How does $c$ vary? Explain briefly what happens to $\hat{\theta}_{1}^{(B)}$ and $\hat{\theta}_{2}^{(B)}$.

[Hint: The Beta $(\alpha, \beta)(\alpha>0, \beta>0)$ distribution has density

$f(x)=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha) \Gamma(\beta)} x^{\alpha-1}(1-x)^{\beta-1}, \quad 0<x<1$

with expectation $\frac{\alpha}{\alpha+\beta}$ and variance $\frac{\alpha \beta}{(\alpha+\beta+1)(\alpha+\beta)^{2}}$. Here, $\Gamma(\alpha)=\int_{0}^{\infty} x^{\alpha-1} e^{-x} d x, \alpha>0$, is the Gamma function.]
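A hedged numerical check of the decomposition for Statistician 1 (an editorial aside): assuming the $\operatorname{Beta}(\alpha, 1)$ prior updates to a $\operatorname{Beta}(\alpha + S,\, nk - S + 1)$ posterior with $S = \sum x_i$, the posterior mean $(\alpha + S)/(\alpha + nk + 1)$ splits exactly as $c\hat{\theta} + (1-c)\tilde{\theta}_1$ with $c = nk/(nk + \alpha + 1)$ and $\tilde{\theta}_1 = \alpha/(\alpha + 1)$. The data below are hypothetical.

```python
# Hypothetical data and prior parameter, each observation in 0..k.
alpha, n, k = 2.5, 10, 6
xs = [3, 5, 2, 4, 6, 1, 3, 2, 5, 4]
S = sum(xs)

theta_hat = S / (n * k)                       # MLE
post_mean = (alpha + S) / (alpha + n * k + 1) # posterior mean (assumed form)

c = n * k / (n * k + alpha + 1)               # weight on the MLE
theta_tilde = alpha / (alpha + 1)             # prior mean of Beta(alpha, 1)
print(abs(post_mean - (c * theta_hat + (1 - c) * theta_tilde)) < 1e-12)  # True
```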

Paper 3, Section I, C

Let $f(x, y, z)=x z+y z$. Using Lagrange multipliers, find the location(s) and value of the maximum of $f$ on the intersection of the unit sphere $\left(x^{2}+y^{2}+z^{2}=1\right)$ and the ellipsoid given by $\frac{1}{4} x^{2}+\frac{1}{4} y^{2}+4 z^{2}=1$.
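A hedged brute-force check (an editorial aside): subtracting the sphere equation from four times the ellipsoid equation gives $15z^{2} = 3$, so the feasible set satisfies $z^{2} = \frac{1}{5}$ and $x^{2}+y^{2} = \frac{4}{5}$; parametrising it and scanning $f = (x+y)z$ reproduces the candidate maximum $\frac{2\sqrt{2}}{5}$.

```python
import math

# Feasible set: x^2 + y^2 = 4/5, z = +-1/sqrt(5); scan f = (x + y) z.
r, zmag = math.sqrt(4 / 5), math.sqrt(1 / 5)
best = max((r * math.cos(t) + r * math.sin(t)) * z
           for z in (zmag, -zmag)
           for t in (2 * math.pi * i / 100000 for i in range(100000)))
print(abs(best - 2 * math.sqrt(2) / 5) < 1e-6)  # True
```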