• # 3.I.3B

Let $f: \mathbf{R}^{2} \rightarrow \mathbf{R}$ be a function. What does it mean to say that $f$ is differentiable at a point $(a, b)$ in $\mathbf{R}^{2}$ ? Show that if $f$ is differentiable at $(a, b)$, then $f$ is continuous at $(a, b)$.

For each of the following functions, determine whether or not it is differentiable at $(0,0)$. Justify your answers.

(i)

$f(x, y)= \begin{cases}x^{2} y^{2}\left(x^{2}+y^{2}\right)^{-1} & \text { if }(x, y) \neq(0,0) \\ 0 & \text { if }(x, y)=(0,0)\end{cases}$

(ii)

$f(x, y)= \begin{cases}x^{2}\left(x^{2}+y^{2}\right)^{-1} & \text { if }(x, y) \neq(0,0) \\ 0 & \text { if }(x, y)=(0,0)\end{cases}$
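As a quick numerical cross-check (a sketch, not the requested justification; `f1`, `f2` are illustrative names for cases (i) and (ii)):

```python
import math

def f1(x, y):  # case (i)
    return 0.0 if (x, y) == (0.0, 0.0) else x**2 * y**2 / (x**2 + y**2)

def f2(x, y):  # case (ii)
    return 0.0 if (x, y) == (0.0, 0.0) else x**2 / (x**2 + y**2)

# (i): since x^2 y^2 <= ((x^2+y^2)/2)^2, we get |f1| <= (x^2+y^2)/4,
# so f1(h)/|h| -> 0: consistent with differentiability at the origin
# with zero derivative.
for t in [10.0 ** -k for k in range(1, 8)]:
    assert f1(t, t) / math.hypot(t, t) < t

# (ii): the limit along y = 0 is 1 but along x = 0 it is 0, so f2 is
# not even continuous at the origin, hence not differentiable there.
print(f2(1e-9, 0.0), f2(0.0, 1e-9))  # 1.0 and 0.0
```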

• # 3.II.13B

Let $f$ be a real-valued differentiable function on an open subset $U$ of $\mathbf{R}^{n}$. Assume that $0 \notin U$ and that for all $x \in U$ and $\lambda>0, \lambda x$ is also in $U$. Suppose that $f$ is homogeneous of degree $c \in \mathbf{R}$, in the sense that $f(\lambda x)=\lambda^{c} f(x)$ for all $x \in U$ and $\lambda>0$. By means of the Chain Rule or otherwise, show that

$\left.D f\right|_{x}(x)=c f(x)$

for all $x \in U$. (Here $\left.D f\right|_{x}$ denotes the derivative of $f$ at $x$, viewed as a linear map $\mathbf{R}^{n} \rightarrow \mathbf{R}$.)

Conversely, show that any differentiable function $f$ on $U$ with $\left.D f\right|_{x}(x)=c f(x)$ for all $x \in U$ must be homogeneous of degree $c$.
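A numerical sanity check of Euler's relation $\left.D f\right|_{x}(x)=c f(x)$ for one sample homogeneous function (an illustration only; the function and step size are assumptions):

```python
# Take f(x, y) = x^3 / (x^2 + y^2), homogeneous of degree c = 1 on
# R^2 \ {0}, and compare x.(grad f)(x) with c f(x) by finite differences.
c = 1.0

def f(x, y):
    return x**3 / (x**2 + y**2)

def euler_lhs(x, y, h=1e-6):
    # Df|_(x,y)(x, y) = x * df/dx + y * df/dy, via central differences.
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return x * dfdx + y * dfdy

for (x, y) in [(1.0, 2.0), (-0.3, 0.7), (2.5, -1.5)]:
    assert abs(euler_lhs(x, y) - c * f(x, y)) < 1e-6
```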


• # 3.II.14A

State the Cauchy integral formula, and use it to deduce Liouville's theorem.

Let $f$ be a meromorphic function on the complex plane such that $\left|f(z) / z^{n}\right|$ is bounded outside some disc (for some fixed integer $n$ ). By considering Laurent expansions, or otherwise, show that $f$ is a rational function in $z$.


• # 3.I.5F

Define a harmonic function and state when the harmonic functions $f$ and $g$ are conjugate.

Let $\{u, v\}$ and $\{p, q\}$ be two pairs of harmonic conjugate functions. Prove that $\{p(u, v), q(u, v)\}$ are also harmonic conjugate.


• # 3.II.17H

If $\mathbf{E}(\mathbf{x}, t), \mathbf{B}(\mathbf{x}, t)$ are solutions of Maxwell's equations in a region without any charges or currents show that $\mathbf{E}^{\prime}(\mathbf{x}, t)=c \mathbf{B}(\mathbf{x}, t), \mathbf{B}^{\prime}(\mathbf{x}, t)=-\mathbf{E}(\mathbf{x}, t) / c$ are also solutions.

At the boundary of a perfect conductor with normal $\mathbf{n}$, briefly explain why

$\mathbf{n} \cdot \mathbf{B}=0, \quad \mathbf{n} \times \mathbf{E}=0$

Electromagnetic waves inside a perfectly conducting tube with axis along the $z$-axis are given by the real parts of complex solutions of Maxwell's equations of the form

$\mathbf{E}(\mathbf{x}, t)=\mathbf{e}(x, y) e^{i(k z-\omega t)}, \quad \mathbf{B}(\mathbf{x}, t)=\mathbf{b}(x, y) e^{i(k z-\omega t)} .$

Suppose $b_{z}=0$. Show that we can find a solution in this case in terms of a function $\psi(x, y)$ where

$\left(e_{x}, e_{y}\right)=\left(\frac{\partial}{\partial x} \psi, \frac{\partial}{\partial y} \psi\right), \quad e_{z}=i\left(k-\frac{\omega^{2}}{k c^{2}}\right) \psi,$

so long as $\psi$ satisfies

$\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}+\gamma^{2}\right) \psi=0$

for suitable $\gamma$. Show that the boundary conditions are satisfied if $\psi=0$ on the surface of the tube.
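For reference, one way to identify a suitable $\gamma$ (a sketch, using only $\nabla \cdot \mathbf{E}=0$ applied to the given ansatz):

$0=\partial_{x} e_{x}+\partial_{y} e_{y}+i k e_{z}=\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}\right) \psi+i k \cdot i\left(k-\frac{\omega^{2}}{k c^{2}}\right) \psi=\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}+\frac{\omega^{2}}{c^{2}}-k^{2}\right) \psi,$

suggesting $\gamma^{2}=\omega^{2} / c^{2}-k^{2}$.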

Obtain a similar solution with $e_{z}=0$ but show that the boundary conditions are now satisfied if the normal derivative $\partial \psi / \partial n=0$ on the surface of the tube.


• # 3.II.18E

Consider the velocity potential in plane polar coordinates

$\phi(r, \theta)=U\left(r+\frac{a^{2}}{r}\right) \cos \theta+\frac{\kappa \theta}{2 \pi}$

Find the velocity field and show that it corresponds to flow past a cylinder $r=a$ with circulation $\kappa$ and uniform flow $U$ at large distances.

Find the distribution of pressure $p$ over the surface of the cylinder. Hence find the $x$ and $y$ components of the force on the cylinder

$\left(F_{x}, F_{y}\right)=\int(\cos \theta, \sin \theta)\, p \, a \, d \theta .$
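A numerical sketch of this surface integral (the density `rho`, ambient pressure `p0`, and the sample values of $U$, $a$, $\kappa$ are illustrative assumptions, not fixed by the question):

```python
import math

# On r = a the radial velocity vanishes and
#   u_theta = -2 U sin(theta) + kappa / (2 pi a),
# so Bernoulli gives p = p0 + rho * (U**2 - u_theta**2) / 2.
U, a, kappa, rho, p0 = 1.3, 0.7, 2.1, 1.0, 5.0
N = 50_000
Fx = Fy = 0.0
d = 2 * math.pi / N
for j in range(N):
    th = 2 * math.pi * (j + 0.5) / N         # midpoint rule
    u_th = -2 * U * math.sin(th) + kappa / (2 * math.pi * a)
    p = p0 + 0.5 * rho * (U**2 - u_th**2)
    Fx += math.cos(th) * p * a * d
    Fy += math.sin(th) * p * a * d

# With the sign convention of the formula above, Fx vanishes, and the
# sin(theta) cross-term of u_theta^2 gives Fy = rho * U * kappa.
assert abs(Fx) < 1e-7
assert abs(Fy - rho * U * kappa) < 1e-7
```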


• # 3.I.2A

Write down the Riemannian metric on the disc model $\Delta$ of the hyperbolic plane. Given that the length-minimizing curves passing through the origin correspond to diameters, show that the hyperbolic circle of radius $\rho$ centred on the origin is just the Euclidean circle centred on the origin with Euclidean radius $\tanh (\rho / 2)$. Prove that the hyperbolic area is $2 \pi(\cosh \rho-1)$.
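A numerical check of the area formula (a sketch, assuming the standard disc-model metric $4\left(d x^{2}+d y^{2}\right) /\left(1-x^{2}-y^{2}\right)^{2}$):

```python
import math

# The hyperbolic circle of radius rho is the Euclidean circle of radius
# R = tanh(rho/2); its hyperbolic area is the integral over r < R of
# 4 r / (1 - r^2)^2 dr dtheta, which should equal 2*pi*(cosh(rho) - 1).
for rho in [0.5, 1.0, 2.0]:
    R = math.tanh(rho / 2)
    N = 100_000
    area = 0.0
    for j in range(N):
        r = R * (j + 0.5) / N                # midpoint rule in r
        area += 4 * r / (1 - r * r) ** 2 * (R / N) * 2 * math.pi
    assert abs(area - 2 * math.pi * (math.cosh(rho) - 1)) < 1e-3
```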

State the Gauss-Bonnet theorem for the area of a hyperbolic triangle. Given a hyperbolic triangle and an interior point $P$, show that the distance from $P$ to the nearest side is at most $\cosh ^{-1}(3 / 2)$.

• # 3.II.12A

Describe geometrically the stereographic projection map $\pi$ from the unit sphere $S^{2}$ to the extended complex plane $\mathbf{C}_{\infty}=\mathbf{C} \cup\{\infty\}$, positioned equatorially, and find a formula for $\pi$.

Show that any Möbius transformation $T \neq 1$ on $\mathbf{C}_{\infty}$ has one or two fixed points. Show that the Möbius transformation corresponding (under the stereographic projection map) to a rotation of $S^{2}$ through a non-zero angle has exactly two fixed points $z_{1}$ and $z_{2}$, where $z_{2}=-1 / \bar{z}_{1}$. If now $T$ is a Möbius transformation with two fixed points $z_{1}$ and $z_{2}$ satisfying $z_{2}=-1 / \bar{z}_{1}$, prove that either $T$ corresponds to a rotation of $S^{2}$, or one of the fixed points, say $z_{1}$, is an attractive fixed point, i.e. for $z \neq z_{2}, T^{n} z \rightarrow z_{1}$ as $n \rightarrow \infty$.

[You may assume the fact that any rotation of $S^{2}$ corresponds to some Möbius transformation of $\mathbf{C}_{\infty}$ under the stereographic projection map.]


• # 3.I.1C

Define what is meant by two elements of a group $G$ being conjugate, and prove that this defines an equivalence relation on $G$. If $G$ is finite, sketch the proof that the cardinality of each conjugacy class divides the order of $G$.
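A small computational illustration of the counting statement (a sketch with $G=S_{3}$ as permutation tuples, not the proof itself):

```python
from itertools import permutations

G = list(permutations(range(3)))

def compose(p, q):            # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

# Conjugacy class of x: { g x g^{-1} : g in G }.
classes = set()
for x in G:
    classes.add(frozenset(compose(compose(g, x), inverse(g)) for g in G))

sizes = sorted(len(c) for c in classes)
print(sizes)                   # [1, 2, 3]: identity, 3-cycles, transpositions
assert sum(sizes) == len(G)    # the classes partition G
assert all(len(G) % s == 0 for s in sizes)  # each size divides |G|
```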


• # 3.II.10B

Let $S$ be the vector space of functions $f: \mathbf{R} \rightarrow \mathbf{R}$ such that the $n$th derivative of $f$ is defined and continuous for every $n \geqslant 0$. Define linear maps $A, B: S \rightarrow S$ by $A(f)=d f / d x$ and $B(f)(x)=x f(x)$. Show that

$[A, B]=1_{S},$

where in this question $[A, B]$ means $A B-B A$ and $1_{S}$ is the identity map on $S$.
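The identity can be checked mechanically on the polynomial subspace of $S$ (a sketch; a polynomial is stored as a coefficient list `c` with $f(x)=\sum_i c[i]\, x^{i}$, and the names are illustrative):

```python
def A(c):   # differentiation
    return [i * c[i] for i in range(1, len(c))] or [0]

def B(c):   # multiplication by x
    return [0] + list(c)

def sub(c, d):
    n = max(len(c), len(d))
    c = c + [0] * (n - len(c))
    d = d + [0] * (n - len(d))
    return [a - b for a, b in zip(c, d)]

# [A, B] f = A(B f) - B(A f) should equal f itself.
for f in ([1], [2, 0, 5], [1, -3, 0, 7, 2]):
    comm = sub(A(B(f)), B(A(f)))
    assert comm[:len(f)] == f and all(v == 0 for v in comm[len(f):])
```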

Now let $V$ be any real vector space with linear maps $A, B: V \rightarrow V$ such that $[A, B]=1_{V}$. Suppose that there is a nonzero element $y \in V$ with $A y=0$. Let $W$ be the subspace of $V$ spanned by $y, B y, B^{2} y$, and so on. Show that $A(B y)$ is in $W$ and give a formula for it. More generally, show that $A\left(B^{i} y\right)$ is in $W$ for each $i \geqslant 0$, and give a formula for it.

Show, using your formula or otherwise, that $\left\{y, B y, B^{2} y, \ldots\right\}$ are linearly independent. (Or, equivalently: show that $y, B y, B^{2} y, \ldots, B^{n} y$ are linearly independent for every $n \geqslant 0$.)


• # 3.I.9D

Prove that if two states of a Markov chain communicate then they have the same period.

Consider a Markov chain with state space $\{1,2, \ldots, 7\}$ and transition probabilities determined by the matrix

$\left(\begin{array}{ccccccc} 0 & \frac{1}{4} & \frac{1}{4} & 0 & 0 & \frac{1}{4} & \frac{1}{4} \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & \frac{1}{3} & 0 & \frac{1}{3} & \frac{1}{3} \\ \frac{1}{2} & 0 & 0 & 0 & 0 & \frac{1}{2} & 0 \\ \frac{1}{6} & \frac{1}{6} & \frac{1}{6} & \frac{1}{6} & 0 & \frac{1}{6} & \frac{1}{6} \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \end{array}\right)$

Identify the communicating classes of the chain and for each class state whether it is open or closed and determine its period.
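The class structure can be recovered mechanically (a sketch, with states relabelled $0,\dots,6$ for indexing; a period of 0 below means the state is never revisited):

```python
from math import gcd

P = [
    [0, 1/4, 1/4, 0, 0, 1/4, 1/4],
    [0, 0, 0, 0, 0, 0, 1],
    [0, 0, 0, 1/3, 0, 1/3, 1/3],
    [1/2, 0, 0, 0, 0, 1/2, 0],
    [1/6, 1/6, 1/6, 1/6, 0, 1/6, 1/6],
    [0, 0, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0, 0, 0],
]
n = len(P)

def reachable(i):
    seen, stack = {i}, [i]
    while stack:
        j = stack.pop()
        for k in range(n):
            if P[j][k] > 0 and k not in seen:
                seen.add(k)
                stack.append(k)
    return seen

reach = [reachable(i) for i in range(n)]
classes = {frozenset(j for j in reach[i] if i in reach[j]) for i in range(n)}

def period(cls):
    i = min(cls)
    g, cur = 0, {i}
    for step in range(1, 3 * n + 1):   # gcd of possible return times
        cur = {k for j in cur for k in range(n) if P[j][k] > 0}
        if i in cur:
            g = gcd(g, step)
    return g

for cls in sorted(classes, key=min):
    closed = all(P[i][j] == 0 for i in cls for j in range(n) if j not in cls)
    print(sorted(s + 1 for s in cls), "closed" if closed else "open", period(cls))
```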


• # 3.I.6E

Describe briefly the method of Lagrangian multipliers for finding the stationary points of a function $f(x, y)$ subject to a constraint $g(x, y)=0$.

Use the method to find the stationary values of $x y$ subject to the constraint $\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}=1 .$
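A numeric sketch of what the multiplier computation should produce (the sample values of $a, b$ are illustrative): parametrizing the ellipse as $(a \cos t, b \sin t)$ turns $x y$ into $(a b / 2) \sin 2 t$, so the stationary values should be $\pm a b / 2$, attained where $x^{2} / a^{2}=y^{2} / b^{2}=1 / 2$.

```python
import math

a, b = 2.0, 3.0
N = 100_000
best_max, best_min = -math.inf, math.inf
for j in range(N):
    t = 2 * math.pi * j / N
    v = (a * math.cos(t)) * (b * math.sin(t))   # xy on the constraint set
    best_max, best_min = max(best_max, v), min(best_min, v)

assert abs(best_max - a * b / 2) < 1e-6
assert abs(best_min + a * b / 2) < 1e-6
```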

• # 3.II.15H

Obtain the power series solution about $t=0$ of

$\left(1-t^{2}\right) \frac{\mathrm{d}^{2}}{\mathrm{~d} t^{2}} y-2 t \frac{\mathrm{d}}{\mathrm{d} t} y+\lambda y=0$

and show that regular solutions $y(t)=P_{n}(t)$, which are polynomials of degree $n$, are obtained only if $\lambda=n(n+1)$, $n=0,1,2, \ldots$. Show that the polynomial must be even or odd according to the value of $n$.

Show that

$\int_{-1}^{1} P_{n}(t) P_{m}(t) \mathrm{d} t=k_{n} \delta_{n m}$

for some $k_{n}>0$.

Using the identity

$\left(x \frac{\partial^{2}}{\partial x^{2}} x+\frac{\partial}{\partial t}\left(1-t^{2}\right) \frac{\partial}{\partial t}\right) \frac{1}{\left(1-2 x t+x^{2}\right)^{\frac{1}{2}}}=0,$

and considering an expansion $\sum_{n} a_{n}(x) P_{n}(t)$ show that

$\frac{1}{\left(1-2 x t+x^{2}\right)^{\frac{1}{2}}}=\sum_{n=0}^{\infty} x^{n} P_{n}(t), \quad 0<x<1,$

if we assume $P_{n}(1)=1$.

By considering

$\int_{-1}^{1} \frac{1}{1-2 x t+x^{2}} d t$

determine the coefficient $k_{n}$.
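A numerical check of the expected answer $k_{n}=2 /(2 n+1)$ (a sketch; it generates the $P_{n}$ by Bonnet's recurrence with the normalisation $P_{n}(1)=1$, rather than by the series solution):

```python
from math import isclose

def legendre(nmax):
    P = [[1.0], [0.0, 1.0]]           # P_0 = 1, P_1 = t (coefficient lists)
    for n in range(1, nmax):
        a = [0.0] + P[n]              # t * P_n
        # (n+1) P_{n+1} = (2n+1) t P_n - n P_{n-1}
        nxt = [((2 * n + 1) * a[i]
                - n * (P[n - 1][i] if i < len(P[n - 1]) else 0.0)) / (n + 1)
               for i in range(len(a))]
        P.append(nxt)
    return P

def integrate(c, d):
    # integral over [-1, 1] of the product of two polynomials
    prod = [0.0] * (len(c) + len(d) - 1)
    for i, ci in enumerate(c):
        for j, dj in enumerate(d):
            prod[i + j] += ci * dj
    return sum(2 * p / (k + 1) for k, p in enumerate(prod) if k % 2 == 0)

P = legendre(5)
for n in range(6):
    for m in range(6):
        target = 2.0 / (2 * n + 1) if n == m else 0.0
        assert isclose(integrate(P[n], P[m]), target, abs_tol=1e-12)
```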


• # 3.I.4A

Show that a topology $\tau_{1}$ is determined on the real line $\mathbf{R}$ by specifying that a non-empty subset is open if and only if it is a union of half-open intervals $\{a \leq x<b\}$, where $a<b$ are real numbers. Determine whether $\left(\mathbf{R}, \tau_{1}\right)$ is Hausdorff.

Let $\tau_{2}$ denote the cofinite topology on $\mathbf{R}$ (that is, a non-empty subset is open if and only if its complement is finite). Prove that the identity map induces a continuous map $\left(\mathbf{R}, \tau_{1}\right) \rightarrow\left(\mathbf{R}, \tau_{2}\right)$.


• # 3.II.19F

Given real $\mu \neq 0$, we consider the matrix

$A=\left[\begin{array}{cccc} \frac{1}{\mu} & 1 & 0 & 0 \\ -1 & \frac{1}{\mu} & 1 & 0 \\ 0 & -1 & \frac{1}{\mu} & 1 \\ 0 & 0 & -1 & \frac{1}{\mu} \end{array}\right]$

Construct the Jacobi and Gauss-Seidel iteration matrices originating in the solution of the linear system $A x=b$.

Determine the range of real $\mu \neq 0$ for which each iterative procedure converges.
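An empirical sketch of the two schemes for one sample value of $\mu$ (the right-hand side and $\mu=0.5$ are illustrative; for this matrix the eigenvalue analysis puts the Jacobi spectral radius at $2|\mu| \cos (\pi / 5)$, so this $\mu$ should converge under both schemes):

```python
def make_A(mu, n=4):
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1.0 / mu
        if i + 1 < n:
            A[i][i + 1] = 1.0
            A[i + 1][i] = -1.0
    return A

def iterate(A, b, method, steps=200):
    n = len(A)
    x = [0.0] * n
    for _ in range(steps):
        new = x[:] if method == "jacobi" else x  # Gauss-Seidel updates in place
        src = x if method == "jacobi" else new
        for i in range(n):
            s = sum(A[i][j] * src[j] for j in range(n) if j != i)
            new[i] = (b[i] - s) / A[i][i]
        x = new
    return x

b = [1.0, 2.0, 3.0, 4.0]
A = make_A(0.5)
for method in ("jacobi", "gs"):
    x = iterate(A, b, method)
    resid = [sum(A[i][j] * x[j] for j in range(4)) - b[i] for i in range(4)]
    assert max(abs(r) for r in resid) < 1e-8   # both iterations converged
```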


• # 3.II.20D

Consider the linear programming problem

\begin{aligned} \operatorname{maximize} \quad 4 x_{1}+x_{2}-9 x_{3} & \\ \text { subject to } \quad x_{2}-11 x_{3} & \leqslant 11 \\ -3 x_{1}+2 x_{2}-7 x_{3} & \leqslant 16 \\ 9 x_{1}-2 x_{2}+10 x_{3} & \leqslant 29, \quad x_{i} \geqslant 0, \quad i=1,2,3 . \end{aligned}

(a) After adding slack variables $z_{1}, z_{2}$ and $z_{3}$ and performing one pivot in the simplex algorithm the following tableau is obtained:

\begin{tabular}{c|rrrrrr|r}
& $x_{1}$ & $x_{2}$ & $x_{3}$ & $z_{1}$ & $z_{2}$ & $z_{3}$ & \\ \hline
$z_{1}$ & 0 & 1 & $-11$ & 1 & 0 & 0 & 11 \\
$z_{2}$ & 0 & $\frac{4}{3}$ & $-\frac{11}{3}$ & 0 & 1 & $\frac{1}{3}$ & $\frac{77}{3}$ \\
$x_{1}$ & 1 & $-\frac{2}{9}$ & $\frac{10}{9}$ & 0 & 0 & $\frac{1}{9}$ & $\frac{29}{9}$ \\ \hline
Payoff & 0 & $\frac{17}{9}$ & $-\frac{121}{9}$ & 0 & 0 & $-\frac{4}{9}$ & $-\frac{116}{9}$
\end{tabular}

Complete the solution of the problem using the simplex algorithm.

(b) Obtain the dual problem and identify its optimal solution from the optimal tableau in (a).

(c) Suppose that the right-hand sides in the constraints to the original problem are changed from $(11,16,29)$ to $\left(11+\epsilon_{1}, 16+\epsilon_{2}, 29+\epsilon_{3}\right)$. Give necessary and sufficient conditions on $\left(\epsilon_{1}, \epsilon_{2}, \epsilon_{3}\right)$ which ensure that the optimal solution to the dual obtained in (b) remains optimal for the dual for the amended problem.
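For cross-checking the hand computation in (a) and (b), a sketch of an exact-arithmetic simplex run from the standard-form tableau (the pivoting rule and starting basis are assumptions; it is not the unique way to complete the tableau):

```python
from fractions import Fraction as F

c = [F(4), F(1), F(-9), F(0), F(0), F(0)]           # objective incl. slacks
T = [[F(0), F(1), F(-11), F(1), F(0), F(0), F(11)],
     [F(-3), F(2), F(-7), F(0), F(1), F(0), F(16)],
     [F(9), F(-2), F(10), F(0), F(0), F(1), F(29)]]
basis = [3, 4, 5]                                    # start from the slacks

while True:
    red = [c[j] - sum(c[basis[i]] * T[i][j] for i in range(3))
           for j in range(6)]                        # reduced costs
    if all(r <= 0 for r in red):
        break                                        # optimal
    q = max(range(6), key=lambda j: red[j])          # entering column
    ratios = [(T[i][6] / T[i][q], i) for i in range(3) if T[i][q] > 0]
    r = min(ratios)[1]                               # leaving row
    piv = T[r][q]
    T[r] = [v / piv for v in T[r]]
    for i in range(3):
        if i != r:
            T[i] = [T[i][j] - T[i][q] * T[r][j] for j in range(7)]
    basis[r] = q

x = [F(0)] * 6
for i, bi in enumerate(basis):
    x[bi] = T[i][6]
value = sum(c[j] * x[j] for j in range(6))
# Dual solution read off the (negated) reduced costs of the slacks.
y = [-red[3 + i] for i in range(3)]
print(x[:3], value, y)      # expect x = (7, 22, 1), value 41
```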


• # 3.I.7G

The wave function $\Psi(x, t)$ is a solution of the time-dependent Schrödinger equation for a particle of mass $m$ in a potential $V(x)$,

$H \Psi(x, t)=i \hbar \frac{\partial}{\partial t} \Psi(x, t),$

where $H$ is the Hamiltonian. Define the expectation value, $\langle\mathcal{O}\rangle$, of any operator $\mathcal{O}$.

At time $t=0, \Psi(x, t)$ can be written as a sum of the form

$\Psi(x, 0)=\sum_{n} a_{n} u_{n}(x),$

where the $u_{n}$ form a complete set of normalized eigenfunctions of the Hamiltonian with energy eigenvalues $E_{n}$, and the $a_{n}$ are complex coefficients satisfying $\sum_{n} a_{n}^{*} a_{n}=1$. Find $\Psi(x, t)$ for $t>0$. What is the probability of finding the system in a state with energy $E_{p}$ at time $t$ ?

Show that the expectation value of the energy is independent of time.
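For reference, a sketch of the expected shape of the answer (the standard superposition argument, not the full justification): each eigenfunction evolves by its own phase, so

$\Psi(x, t)=\sum_{n} a_{n} u_{n}(x) e^{-i E_{n} t / \hbar},$

the probability of measuring energy $E_{p}$ is $\left|a_{p}\right|^{2}$ (summed over any degenerate labels), and

$\langle H\rangle=\sum_{n}\left|a_{n}\right|^{2} E_{n},$

which contains no $t$-dependence.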

• # 3.II.16G

A particle of mass $\mu$ moves in two dimensions in an axisymmetric potential. Show that the time-independent Schrödinger equation can be separated in polar coordinates. Show that the angular part of the wave function has the form $e^{i m \phi}$, where $\phi$ is the angular coordinate and $m$ is an integer. Suppose that the potential is zero for $r<a$, where $r$ is the radial coordinate, and infinite otherwise. Show that the radial part of the wave function satisfies

$\frac{d^{2} R}{d \rho^{2}}+\frac{1}{\rho} \frac{d R}{d \rho}+\left(1-\frac{m^{2}}{\rho^{2}}\right) R=0$

where $\rho=r\left(2 \mu E / \hbar^{2}\right)^{1 / 2}$. What conditions must $R$ satisfy at $r=0$ and $r=a$ ?

Show that, when $m=0$, the equation has the solution $R(\rho)=\sum_{k=0}^{\infty} A_{k} \rho^{k}$, where $A_{k}=0$ if $k$ is odd and

$A_{k+2}=-\frac{A_{k}}{(k+2)^{2}}$

if $k$ is even.

Deduce the coefficients $A_{2}$ and $A_{4}$ in terms of $A_{0}$. By truncating the series expansion at order $\rho^{4}$, estimate the smallest value of $\rho$ at which $R$ is zero. Hence give an estimate of the ground state energy.
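A sketch of the truncation estimate (taking $A_{0}=1$ for normalisation): the recursion gives $A_{2}=-1/4$ and $A_{4}=1/64$, so the truncated series is $R(\rho) \approx 1-\rho^{2} / 4+\rho^{4} / 64$, whose smallest positive zero gives the estimate.

```python
import math

A = {0: 1.0}
for k in (0, 2):
    A[k + 2] = -A[k] / (k + 2) ** 2       # the recursion A_{k+2} = -A_k/(k+2)^2
assert A[2] == -0.25 and A[4] == 1.0 / 64

# Solve 1 - u/4 + u^2/64 = 0 with u = rho^2, i.e. u^2 - 16 u + 64 = 0,
# which has the double root u = 8.
u = (16 - math.sqrt(16**2 - 4 * 64)) / 2
rho_zero = math.sqrt(u)
print(rho_zero)   # about 2.828 (the true first zero of J_0 is about 2.405)
# Ground-state estimate: E is roughly hbar^2 rho_zero^2 / (2 mu a^2),
# i.e. rho_zero^2 = 8 in units of hbar^2 / (2 mu a^2).
```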

[You may use the fact that the Laplace operator is given in polar coordinates by the expression

$\left.\nabla^{2}=\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r} \frac{\partial}{\partial r}+\frac{1}{r^{2}} \frac{\partial^{2}}{\partial \phi^{2}}\right]$


• # 3.I.8D

Let $X_{1}, \ldots, X_{n}$ be a random sample from a normal distribution with mean $\mu$ and variance $\sigma^{2}$, where $\mu$ and $\sigma^{2}$ are unknown. Derive the form of the size- $\alpha$ generalized likelihood-ratio test of the hypothesis $H_{0}: \mu=\mu_{0}$ against $H_{1}: \mu \neq \mu_{0}$, and show that it is equivalent to the standard $t$-test of size $\alpha$.

[You should state, but need not derive, the distribution of the test statistic.]
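A numerical sketch of the claimed equivalence on simulated data (the sample parameters are illustrative): the likelihood ratio is a monotone function of $t^{2}$, namely $\Lambda=\left(\hat{\sigma}_{0}^{2} / \hat{\sigma}_{1}^{2}\right)^{n / 2}=\left(1+t^{2} /(n-1)\right)^{n / 2}$.

```python
import math
import random

random.seed(0)
n, mu0 = 20, 1.5
x = [random.gauss(2.0, 1.3) for _ in range(n)]
xbar = sum(x) / n
s2 = sum((v - xbar) ** 2 for v in x) / (n - 1)   # sample variance
t = (xbar - mu0) / math.sqrt(s2 / n)             # standard t statistic

sigma0 = sum((v - mu0) ** 2 for v in x) / n      # MLE of variance under H0
sigma1 = sum((v - xbar) ** 2 for v in x) / n     # unrestricted MLE
Lambda = (sigma0 / sigma1) ** (n / 2)
assert math.isclose(Lambda, (1 + t * t / (n - 1)) ** (n / 2))
```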
