• # 2.I.3G

Consider a sequence of continuous functions $F_{n}:[-1,1] \rightarrow \mathbb{R}$. Suppose that the functions $F_{n}$ converge uniformly to some continuous function $F$. Show that the integrals $\int_{-1}^{1} F_{n}(x) d x$ converge to $\int_{-1}^{1} F(x) d x$.

Give an example to show that, even if the functions $F_{n}(x)$ and $F(x)$ are differentiable, the derivatives $F_{n}^{\prime}(0)$ need not converge to $F^{\prime}(0)$.
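The second part asks for a counterexample. As an informal numerical illustration (the choice $F_n(x)=\sin(n^2x)/n$ is ours, not the question's), one can watch the integrals converge to $\int_{-1}^{1} F = 0$ while the derivatives at $0$ diverge:

```python
import numpy as np

# Illustrative example: F_n(x) = sin(n^2 x)/n converges uniformly to F = 0
# on [-1, 1], since sup |F_n| = 1/n -> 0.
x = np.linspace(-1.0, 1.0, 20001)

for n in (1, 10, 100):
    F_n = np.sin(n**2 * x) / n
    # trapezoidal approximation to the integral of F_n over [-1, 1]
    integral = np.sum(0.5 * (F_n[1:] + F_n[:-1]) * np.diff(x))
    deriv_at_0 = n            # F_n'(x) = n cos(n^2 x), so F_n'(0) = n
    print(n, abs(integral), deriv_at_0)
```

The integrals tend to $0$ (here they vanish by oddness), while $F_n'(0)=n\to\infty$, so uniform convergence controls integrals but not derivatives.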

• # 2.II.14G

Let $X$ be a non-empty complete metric space. Give an example to show that the intersection of a descending sequence of non-empty closed subsets of $X, A_{1} \supset A_{2} \supset \cdots$, can be empty. Show that if we also assume that

$\lim _{n \rightarrow \infty} \operatorname{diam}\left(A_{n}\right)=0$

then the intersection is not empty. Here the diameter $\operatorname{diam}(A)$ is defined as the supremum of the distances between any two points of a set $A$.

We say that a subset $A$ of $X$ is dense if it has nonempty intersection with every nonempty open subset of $X$. Let $U_{1}, U_{2}, \ldots$ be any sequence of dense open subsets of $X$. Show that the intersection $\bigcap_{n=1}^{\infty} U_{n}$ is not empty.

[Hint: Look for a descending sequence of subsets $A_{1} \supset A_{2} \supset \cdots$, with $A_{i} \subset U_{i}$, such that the previous part of this problem applies.]


• # 2.I.5A

Let the functions $f$ and $g$ be analytic in an open, nonempty domain $\Omega$ and assume that $g \neq 0$ there. Prove that if $|f(z)| \equiv|g(z)|$ in $\Omega$ then there exists $\alpha \in \mathbb{R}$ such that $f(z) \equiv e^{i \alpha} g(z)$.

• # 2.II.16A

Prove by using the Cauchy theorem that if $f$ is analytic in the open disc $\Omega=\{z \in \mathbb{C}:|z|<1\}$ then there exists a function $g$, analytic in $\Omega$, such that $g^{\prime}(z)=f(z)$, $z \in \Omega$.


• # 2.I.7B

Write down the two Maxwell equations that govern steady magnetic fields. Show that the boundary conditions satisfied by the magnetic field on either side of a sheet carrying a surface current of density $\mathbf{s}$, with normal $\mathbf{n}$ to the sheet, are

$\mathbf{n} \times \mathbf{B}_{+}-\mathbf{n} \times \mathbf{B}_{-}=\mu_{0} \mathbf{s}$

Write down the force per unit area on the surface current.

• # 2.II.18B

The vector potential due to a steady current density $\mathbf{J}$ is given by

$\mathbf{A}(\mathbf{r})=\frac{\mu_{0}}{4 \pi} \int \frac{\mathbf{J}\left(\mathbf{r}^{\prime}\right)}{\left|\mathbf{r}-\mathbf{r}^{\prime}\right|} d^{3} \mathbf{r}^{\prime} \quad (*)$

where you may assume that $\mathbf{J}$ extends only over a finite region of space. Use $(*)$ to derive the Biot-Savart law

$\mathbf{B}(\mathbf{r})=\frac{\mu_{0}}{4 \pi} \int \frac{\mathbf{J}\left(\mathbf{r}^{\prime}\right) \times\left(\mathbf{r}-\mathbf{r}^{\prime}\right)}{\left|\mathbf{r}-\mathbf{r}^{\prime}\right|^{3}} d^{3} \mathbf{r}^{\prime}$

A circular loop of wire of radius $a$ carries a current $I$. Take Cartesian coordinates with the origin at the centre of the loop and the $z$-axis normal to the loop. Use the Biot-Savart law to show that on the $z$-axis the magnetic field is in the axial direction and of magnitude

$B=\frac{\mu_{0} I a^{2}}{2\left(z^{2}+a^{2}\right)^{3 / 2}}$
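As an informal check of the axial formula (the values of $I$, $a$ and $z$ below are arbitrary), one can discretise the loop into current elements and sum the Biot-Savart integrand directly:

```python
import numpy as np

mu0 = 4e-7 * np.pi
I, a, z = 2.0, 1.0, 0.7        # illustrative current, radius and axial position
N = 1000
phi = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

# points on the loop and the approximate line elements dl
rp = np.stack([a * np.cos(phi), a * np.sin(phi), np.zeros(N)], axis=1)
dl = np.stack([-a * np.sin(phi), a * np.cos(phi), np.zeros(N)], axis=1) * (2 * np.pi / N)

r = np.array([0.0, 0.0, z])
sep = r - rp
B = mu0 * I / (4 * np.pi) * np.sum(
    np.cross(dl, sep) / np.linalg.norm(sep, axis=1)[:, None] ** 3, axis=0)

B_exact = mu0 * I * a**2 / (2 * (z**2 + a**2) ** 1.5)
print(B, B_exact)   # B_x, B_y cancel by symmetry; B_z matches the axial formula
```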


• # 2.II.15E

(i) Let $X$ be the set of all infinite sequences $\left(\epsilon_{1}, \epsilon_{2}, \ldots\right)$ such that $\epsilon_{i} \in\{0,1\}$ for all $i$. Let $\tau$ be the collection of all subsets $Y \subset X$ such that, for every $\left(\epsilon_{1}, \epsilon_{2}, \ldots\right) \in Y$ there exists $n$ such that $\left(\eta_{1}, \eta_{2}, \ldots\right) \in Y$ whenever $\eta_{1}=\epsilon_{1}, \eta_{2}=\epsilon_{2}, \ldots, \eta_{n}=\epsilon_{n}$. Prove that $\tau$ is a topology on $X$.

(ii) Let a distance $d$ be defined on $X$ by

$d\left(\left(\epsilon_{1}, \epsilon_{2}, \ldots\right),\left(\eta_{1}, \eta_{2}, \ldots\right)\right)=\sum_{n=1}^{\infty} 2^{-n}\left|\epsilon_{n}-\eta_{n}\right|$

Prove that $d$ is a metric and that the topology arising from $d$ is the same as $\tau$.


• # 2.I.2F

Prove that the alternating group $A_{5}$ is simple.

• # 2.II.13F

Let $K$ be a subgroup of a group $G$. Prove that $K$ is normal if and only if there is a group $H$ and a homomorphism $\phi: G \rightarrow H$ such that

$K=\{g \in G: \phi(g)=1\}$

Let $G$ be the group of all $2 \times 2$ matrices $\left(\begin{array}{ll}a & b \\ c & d\end{array}\right)$ with $a, b, c, d$ in $\mathbb{Z}$ and $a d-b c=1$. Let $p$ be a prime number, and take $K$ to be the subset of $G$ consisting of all $\left(\begin{array}{ll}a & b \\ c & d\end{array}\right)$ with $a \equiv d \equiv 1(\bmod p)$ and $c \equiv b \equiv 0(\bmod p) .$ Prove that $K$ is a normal subgroup of $G .$


• # 2.I.1E

For each $n$ let $A_{n}$ be the $n \times n$ matrix defined by

$\left(A_{n}\right)_{i j}= \begin{cases}i & i \leqslant j \\ j & i>j\end{cases}$

What is $\operatorname{det} A_{n} ?$ Justify your answer.

[It may be helpful to look at the cases $n=1,2,3$ before tackling the general case.]
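Following the hint, a quick numerical experiment with small $n$ (a sketch to suggest the pattern, not a proof) is easy to set up:

```python
import numpy as np

# (A_n)_{ij} = min(i, j); try small n before tackling the general case.
for n in range(1, 7):
    idx = np.arange(1, n + 1)
    A = np.minimum.outer(idx, idx)
    print(n, round(np.linalg.det(A)))
```

Each determinant comes out to $1$, consistent with the factorisation $A_n = LL^T$ where $L$ is the lower-triangular matrix of ones.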

• # 2.II.12E

Let $Q$ be a quadratic form on a real vector space $V$ of dimension $n$. Prove that there is a basis $\mathbf{e}_{1}, \ldots, \mathbf{e}_{n}$ with respect to which $Q$ is given by the formula

$Q\left(\sum_{i=1}^{n} x_{i} \mathbf{e}_{i}\right)=x_{1}^{2}+\ldots+x_{p}^{2}-x_{p+1}^{2}-\ldots-x_{p+q}^{2}$

Prove that the numbers $p$ and $q$ are uniquely determined by the form $Q$. By means of an example, show that the subspaces $\left\langle\mathbf{e}_{1}, \ldots, \mathbf{e}_{p}\right\rangle$ and $\left\langle\mathbf{e}_{p+1}, \ldots, \mathbf{e}_{p+q}\right\rangle$ need not be uniquely determined by $Q$.


• # 2.I.11H

Let $\left(X_{r}\right)_{r \geqslant 0}$ be an irreducible, positive-recurrent Markov chain on the state space $S$ with transition matrix $\left(P_{i j}\right)$ and initial distribution $P\left(X_{0}=i\right)=\pi_{i}, i \in S$, where $\left(\pi_{i}\right)$ is the unique invariant distribution. What does it mean to say that the Markov chain is reversible?

Prove that the Markov chain is reversible if and only if $\pi_{i} P_{i j}=\pi_{j} P_{j i}$ for all $i, j \in S$.
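As an illustration of the detailed-balance criterion (the three-state birth-death chain below is our example, not part of the question), one can compute the invariant distribution and verify $\pi_i P_{ij} = \pi_j P_{ji}$ numerically:

```python
import numpy as np

# A small birth-death chain; such chains are reversible in equilibrium.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# Invariant distribution: left eigenvector of P for eigenvalue 1, normalised.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

# Detailed balance: the matrix D with D_ij = pi_i P_ij should be symmetric.
D = pi[:, None] * P
print(pi, np.allclose(D, D.T))
```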

• # 2.II.22H

Consider a Markov chain on the state space $S=\{0,1,2, \ldots\} \cup\left\{1^{\prime}, 2^{\prime}, 3^{\prime}, \ldots\right\}$ with transition probabilities as illustrated in the diagram below, where $0<q<1$ and $p=1-q$.

For each value of $q$, $0<q<1$, determine whether the chain is transient, null recurrent or positive recurrent.

When the chain is positive recurrent, calculate the invariant distribution.


• # 2.I.6B

Write down the general form of the solution in polar coordinates $(r, \theta)$ to Laplace's equation in two dimensions.

Solve Laplace's equation for $\phi(r, \theta)$ in $0<r<1$ and in $1<r<\infty$, subject to the conditions

$\begin{gathered} \phi \rightarrow 0 \quad \text { as } \quad r \rightarrow 0 \text { and } r \rightarrow \infty \\ \left.\phi\right|_{r=1+}=\left.\phi\right|_{r=1-} \quad \text { and }\left.\quad \frac{\partial \phi}{\partial r}\right|_{r=1+}-\left.\frac{\partial \phi}{\partial r}\right|_{r=1-}=\cos 2 \theta+\cos 4 \theta . \end{gathered}$

• # 2.II.17B

Let $I_{i j}(P)$ be the moment-of-inertia tensor of a rigid body relative to the point $P$. If $G$ is the centre of mass of the body and the vector $G P$ has components $X_{i}$, show that

$I_{i j}(P)=I_{i j}(G)+M\left(X_{k} X_{k} \delta_{i j}-X_{i} X_{j}\right),$

where $M$ is the mass of the body.

Consider a cube of uniform density and side $2 a$, with centre at the origin. Find the inertia tensor about the centre of mass, and thence about the corner $P=(a, a, a)$.

Find the eigenvectors and eigenvalues of $I_{i j}(P)$.
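Assuming the standard result $I_{ij}(G)=\frac{2}{3} M a^{2} \delta_{ij}$ for a uniform cube of side $2a$, the displayed formula can be checked numerically (the mass and half-side below are arbitrary):

```python
import numpy as np

M, a = 3.0, 1.5                              # illustrative mass and half-side
I_G = (2.0 / 3.0) * M * a**2 * np.eye(3)     # inertia tensor about the centre of mass

X = np.array([a, a, a])                      # the vector GP to the corner
I_P = I_G + M * (X @ X * np.eye(3) - np.outer(X, X))

# Eigenvalues: (2/3) M a^2 along (1,1,1), and (11/3) M a^2 twice, perpendicular to it.
print(np.sort(np.linalg.eigvalsh(I_P)) / (M * a**2))
```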


• # 2.I.9A

Determine the coefficients of Gaussian quadrature for the evaluation of the integral

$\int_{0}^{1} f(x) x d x$

that uses two function evaluations.
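One way to sketch the computation numerically: form the moments of the weight $w(x)=x$, find the monic degree-2 polynomial orthogonal to $1$ and $x$ with respect to $w$, take its roots as nodes, and fix the two coefficients by exactness on $1$ and $x$:

```python
import numpy as np

# Moments of w(x) = x on [0, 1]: m_k = int_0^1 x^{k+1} dx = 1/(k+2).
m = np.array([1.0 / (k + 2) for k in range(4)])

# Monic orthogonal polynomial x^2 + b x + c: int w(x)(x^2 + bx + c) x^j dx = 0, j = 0, 1.
b, c = np.linalg.solve(np.array([[m[1], m[0]],
                                 [m[2], m[1]]]), -m[2:4])
nodes = np.sort(np.roots([1.0, b, c]))

# Coefficients from exactness on 1 and x.
w = np.linalg.solve(np.vander(nodes, increasing=True).T, m[:2])

# The rule should then integrate x^k exactly for k = 0, ..., 3.
for k in range(4):
    print(k, w @ nodes**k, m[k])
```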

• # 2.II.20A

Given an $m \times n$ matrix $A$ and $\mathbf{b} \in \mathbb{R}^{m}$, prove that the vector $\mathbf{x} \in \mathbb{R}^{n}$ is the solution of the least-squares problem for $A \mathbf{x} \approx \mathbf{b}$ if and only if $A^{T}(A \mathbf{x}-\mathbf{b})=\mathbf{0}$. Let

$A=\left[\begin{array}{cc} 1 & 2 \\ -3 & 1 \\ 1 & 3 \\ 4 & 1 \end{array}\right], \quad \mathbf{b}=\left[\begin{array}{c} 3 \\ 0 \\ -1 \\ 2 \end{array}\right]$

Determine the solution of the least-squares problem for $A \mathbf{x} \approx \mathbf{b}$.
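A numerical sketch of the concrete part, solving the normal equations $A^{T} A \mathbf{x}=A^{T} \mathbf{b}$ directly:

```python
import numpy as np

A = np.array([[1.0, 2.0], [-3.0, 1.0], [1.0, 3.0], [4.0, 1.0]])
b = np.array([3.0, 0.0, -1.0, 2.0])

# Solve the normal equations A^T A x = A^T b ...
x = np.linalg.solve(A.T @ A, A.T @ b)

# ... and check the characterising condition A^T (A x - b) = 0.
print(x, A.T @ (A @ x - b))
```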


• # 2.I.8D

A quantum mechanical system is described by vectors $\psi=\left(\begin{array}{l}a \\ b\end{array}\right)$. The energy eigenvectors are

$\psi_{0}=\left(\begin{array}{c} \cos \theta \\ \sin \theta \end{array}\right), \quad \psi_{1}=\left(\begin{array}{c} -\sin \theta \\ \cos \theta \end{array}\right)$

with energies $E_{0}, E_{1}$ respectively. The system is in the state $\left(\begin{array}{l}1 \\ 0\end{array}\right)$ at time $t=0$. What is the probability of finding it in the state $\left(\begin{array}{l}0 \\ 1\end{array}\right)$ at a later time $t ?$
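A numerical sanity check (in units $\hbar=1$, with arbitrary illustrative parameters) that direct evolution in the energy eigenbasis reproduces the standard closed-form transition probability $\sin^{2} 2\theta\,\sin^{2}\left((E_{1}-E_{0})t/2\hbar\right)$:

```python
import numpy as np

hbar = 1.0
theta, E0, E1, t = 0.4, 1.0, 2.5, 3.0     # illustrative values

psi0 = np.array([np.cos(theta), np.sin(theta)])
psi1 = np.array([-np.sin(theta), np.cos(theta)])

# Expand the initial state (1, 0) in the eigenbasis and evolve each phase.
c0, c1 = psi0 @ [1.0, 0.0], psi1 @ [1.0, 0.0]
psi_t = c0 * np.exp(-1j * E0 * t / hbar) * psi0 \
      + c1 * np.exp(-1j * E1 * t / hbar) * psi1

P_direct = abs(psi_t[1])**2               # probability of finding (0, 1)
P_formula = np.sin(2 * theta)**2 * np.sin((E1 - E0) * t / (2 * hbar))**2
print(P_direct, P_formula)
```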

• # 2.II.19D

Consider a Hamiltonian of the form

$H=\frac{1}{2 m}(p+i f(x))(p-i f(x)), \quad-\infty<x<\infty$

where $f(x)$ is a real function. Show that this can be written in the form $H=p^{2} /(2 m)+V(x)$, for some real $V(x)$ to be determined. Show that there is a wave function $\psi_{0}(x)$, satisfying a first-order equation, such that $H \psi_{0}=0$. If $f$ is a polynomial of degree $n$, show that $n$ must be odd in order for $\psi_{0}$ to be normalisable. By considering $\int \mathrm{d} x \psi^{*} H \psi$ show that all energy eigenvalues other than that for $\psi_{0}$ must be positive.

For $f(x)=k x$, use these results to find the lowest energy and corresponding wave function for the harmonic oscillator Hamiltonian

$H_{\text {oscillator }}=\frac{p^{2}}{2 m}+\frac{1}{2} m \omega^{2} x^{2} .$
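For $f(x)=kx$ with $k=m\omega$, the first-order equation gives $\psi_{0}\propto e^{-m\omega x^{2}/2\hbar}$ with oscillator energy $\hbar\omega/2$. A finite-difference sanity check in units $m=\hbar=\omega=1$:

```python
import numpy as np

# With m = hbar = omega = 1, psi_0(x) = exp(-x^2/2) should satisfy
# H_oscillator psi_0 = (1/2) psi_0, i.e. lowest energy hbar*omega/2.
x = np.linspace(-5.0, 5.0, 4001)
h = x[1] - x[0]
psi = np.exp(-x**2 / 2.0)

# Finite-difference second derivative on interior points.
d2psi = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / h**2
H_psi = -0.5 * d2psi + 0.5 * x[1:-1]**2 * psi[1:-1]

# The local energy H psi / psi should be 1/2 up to discretisation error.
print(np.max(np.abs(H_psi / psi[1:-1] - 0.5)))
```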


• # 2.I.10H

A study of 60 men and 90 women classified each individual according to eye colour to produce the figures below.

\begin{tabular}{|c|c|c|c|} \cline { 2 - 4 } \multicolumn{1}{c|}{} & Blue & Brown & Green \\ \hline Men & 20 & 20 & 20 \\ \hline Women & 20 & 50 & 20 \\ \hline \end{tabular}

Explain how you would analyse these results. You should indicate carefully any underlying assumptions that you are making.

A further study took 150 individuals and classified them both by eye colour and by whether they were left or right handed to produce the following table.

\begin{tabular}{|c|c|c|c|} \cline { 2 - 4 } \multicolumn{1}{c|}{} & Blue & Brown & Green \\ \hline Left Handed & 20 & 20 & 20 \\ \hline Right Handed & 20 & 50 & 20 \\ \hline \end{tabular}

How would your analysis change? You should again set out your underlying assumptions carefully.

[You may wish to note the following percentiles of the $\chi^{2}$ distribution.

$\left.\begin{array}{ccccccc} & \chi_{1}^{2} & \chi_{2}^{2} & \chi_{3}^{2} & \chi_{4}^{2} & \chi_{5}^{2} & \chi_{6}^{2} \\ 95 \% \text { percentile } & 3.84 & 5.99 & 7.81 & 9.49 & 11.07 & 12.59 \\ 99 \% \text { percentile } & 6.64 & 9.21 & 11.34 & 13.28 & 15.09 & 16.81\end{array}\right]$
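Both tables lead to a $\chi^{2}$ statistic on $(2-1)(3-1)=2$ degrees of freedom (a test of homogeneity for the first design, of independence for the second, under the usual multinomial assumptions). A minimal sketch of the computation:

```python
import numpy as np

obs = np.array([[20.0, 20.0, 20.0],
                [20.0, 50.0, 20.0]])

# Expected counts under homogeneity/independence: row_total * col_total / n.
row, col, n = obs.sum(axis=1), obs.sum(axis=0), obs.sum()
exp = np.outer(row, col) / n

chi2 = np.sum((obs - exp)**2 / exp)
df = (obs.shape[0] - 1) * (obs.shape[1] - 1)
print(chi2, df)   # about 7.14 on 2 degrees of freedom, exceeding the 5.99 cutoff
```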

• # 2.II.21H

Defining carefully the terminology that you use, state and prove the Neyman-Pearson Lemma.

Let $X$ be a single observation from the distribution with density function

$f(x \mid \theta)=\frac{1}{2} e^{-|x-\theta|}, \quad-\infty<x<\infty$

for an unknown real parameter $\theta$. Find the best test of size $\alpha, 0<\alpha<1$, of the hypothesis $H_{0}: \theta=\theta_{0}$ against $H_{1}: \theta=\theta_{1}$, where $\theta_{1}>\theta_{0}$.

When $\alpha=0.05$, for which values of $\theta_{0}$ and $\theta_{1}$ will the power of the best test be at least $0.95$ ?
