• # 2.I.1A

State and prove the contraction mapping theorem.

Let $A=\{x, y, z\}$, let $d$ be the discrete metric on $A$, and let $d^{\prime}$ be the metric given by: $d^{\prime}$ is symmetric and

$\begin{gathered} d^{\prime}(x, y)=2, d^{\prime}(x, z)=2, d^{\prime}(y, z)=1 \\ d^{\prime}(x, x)=d^{\prime}(y, y)=d^{\prime}(z, z)=0 \end{gathered}$

Verify that $d^{\prime}$ is a metric, and that it is Lipschitz equivalent to $d$.

Define an appropriate function $f: A \rightarrow A$ such that $f$ is a contraction in the $d^{\prime}$ metric, but not in the $d$ metric.

• # 2.II.10A

Define total boundedness for metric spaces.

Prove that a metric space has the Bolzano-Weierstrass property if and only if it is complete and totally bounded.


• # 2.II.16E

Let $R$ be a rational function such that $\lim _{z \rightarrow \infty}\{z R(z)\}=0$. Assuming that $R$ has no real poles, use the residue calculus to evaluate

$\int_{-\infty}^{\infty} R(x) d x$

Given that $n \geqslant 1$ is an integer, evaluate

$\int_{0}^{\infty} \frac{d x}{1+x^{2 n}}$
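The residue calculation yields the closed form $\pi/(2n\sin(\pi/2n))$ for this integral. As an informal numerical sanity check (not part of the question; the helper names are ours), one can fold the range $[1,\infty)$ onto $(0,1]$ by $x \mapsto 1/x$ and apply Simpson's rule:

```python
import math

def integral_1_over_1_plus_x2n(n, m=2000):
    """Numerically evaluate int_0^oo dx/(1 + x^(2n)).

    The substitution x -> 1/x folds [1, oo) onto (0, 1], giving
    int_0^1 (1 + u^(2n-2)) / (1 + u^(2n)) du, evaluated by composite
    Simpson's rule with 2m subintervals.
    """
    f = lambda u: (1.0 + u ** (2 * n - 2)) / (1.0 + u ** (2 * n))
    h = 1.0 / (2 * m)
    s = f(0.0) + f(1.0)
    for k in range(1, 2 * m):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3.0

def closed_form(n):
    # Value produced by the residue calculus: pi / (2n sin(pi/2n)).
    return math.pi / (2 * n * math.sin(math.pi / (2 * n)))
```

For $n=1$ both sides reduce to $\pi/2$, the familiar arctangent integral.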


• # 2.I.4B

Define the terms connected and path connected for a topological space. If a topological space $X$ is path connected, prove that it is connected.

Consider the following subsets of $\mathbb{R}^{2}$ :

$\begin{gathered} I=\{(x, 0): 0 \leq x \leq 1\}, \quad A=\left\{(0, y): \frac{1}{2} \leq y \leq 1\right\}, \text { and } \\ J_{n}=\left\{\left(n^{-1}, y\right): 0 \leq y \leq 1\right\} \quad \text { for } n \geq 1 \end{gathered}$

Let

$X=A \cup I \cup \bigcup_{n \geq 1} J_{n}$

with the subspace (metric) topology. Prove that $X$ is connected.

[You may assume that any interval in $\mathbb{R}$ (with the usual topology) is connected.]

• # 2.II.13A

State Liouville's Theorem. Prove it by considering

$\int_{|z|=R} \frac{f(z) d z}{(z-a)(z-b)}$

and letting $R \rightarrow \infty$.

Prove that, if $g(z)$ is a function analytic on all of $\mathbb{C}$ with real and imaginary parts $u(z)$ and $v(z)$, then either of the conditions:

$\text { (i) } u+v \geqslant 0 \text { for all } z \text {; or (ii) } u v \geqslant 0 \text { for all } z \text {, }$

implies that $g(z)$ is constant.


• # 2.I.6C

Show that right multiplication by $A=\left(\begin{array}{ll}a & b \\ c & d\end{array}\right) \in M_{2 \times 2}(\mathbb{C})$ defines a linear transformation $\rho_{A}: M_{2 \times 2}(\mathbb{C}) \rightarrow M_{2 \times 2}(\mathbb{C})$. Find the matrix representing $\rho_{A}$ with respect to the basis

$\left(\begin{array}{ll} 1 & 0 \\ 0 & 0 \end{array}\right),\left(\begin{array}{ll} 0 & 1 \\ 0 & 0 \end{array}\right),\left(\begin{array}{ll} 0 & 0 \\ 1 & 0 \end{array}\right),\left(\begin{array}{ll} 0 & 0 \\ 0 & 1 \end{array}\right)$

of $M_{2 \times 2}(\mathbb{C})$. Prove that the characteristic polynomial of $\rho_{A}$ is equal to the square of the characteristic polynomial of $A$, and that $A$ and $\rho_{A}$ have the same minimal polynomial.
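As a small computational sketch (helper names are ours, not part of the question), one can tabulate the images of the four basis matrices under right multiplication and read off the matrix of $\rho_A$:

```python
def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rho_matrix(A):
    """Matrix of rho_A(X) = XA in the basis E11, E12, E21, E22 above.

    Column j lists the coordinates of E_j A; the coordinates of a 2x2
    matrix in this basis are just its four entries read row by row.
    """
    basis = [[[1, 0], [0, 0]], [[0, 1], [0, 0]],
             [[0, 0], [1, 0]], [[0, 0], [0, 1]]]
    cols = [sum(matmul(E, A), []) for E in basis]  # flatten each image
    return [[cols[j][i] for j in range(4)] for i in range(4)]

# For A = [[a, b], [c, d]] this produces the block form diag(A^T, A^T),
# from which det(rho_A) = (det A)^2 and tr(rho_A) = 2 tr A can be read off,
# consistent with the characteristic polynomial claim.
```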

• # 2.II.15C

Define the dual $V^{*}$ of a vector space $V$. Given a basis $\left\{v_{1}, \ldots, v_{n}\right\}$ of $V$ define its dual and show it is a basis of $V^{*}$. For a linear transformation $\alpha: V \rightarrow W$ define the dual $\alpha^{*}: W^{*} \rightarrow V^{*}$.

Explain (with proof) how the matrix representing $\alpha: V \rightarrow W$ with respect to given bases of $V$ and $W$ relates to the matrix representing $\alpha^{*}: W^{*} \rightarrow V^{*}$ with respect to the corresponding dual bases of $V^{*}$ and $W^{*}$.

Prove that $\alpha$ and $\alpha^{*}$ have the same rank.

Suppose that $\alpha$ is an invertible endomorphism. Prove that $\left(\alpha^{*}\right)^{-1}=\left(\alpha^{-1}\right)^{*}$.


• # 2.I.2G

Show that the symmetric and antisymmetric parts of a second-rank tensor are themselves tensors, and that the decomposition of a tensor into symmetric and antisymmetric parts is unique.

For the tensor $A$ having components

$A=\left(\begin{array}{lll} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 1 & 2 & 3 \end{array}\right)$

find the scalar $a$, vector $\mathbf{p}$ and symmetric traceless tensor $B$ such that

$A \mathbf{x}=a \mathbf{x}+\mathbf{p} \wedge \mathbf{x}+B \mathbf{x}$

for every vector $\mathbf{x}$.
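The decomposition can be checked mechanically: $a$ is a third of the trace, $\mathbf{p}$ is read off from the antisymmetric part via $W_{ij} = -\epsilon_{ijk} p_k$, and $B$ is the symmetric part minus $a$ times the identity. A sketch (function names are ours):

```python
def decompose(A):
    """Split a 3x3 array A so that A x = a x + p ^ x + B x for every x,
    with B symmetric and traceless.  A sketch, not a general library."""
    a = sum(A[i][i] for i in range(3)) / 3.0
    S = [[(A[i][j] + A[j][i]) / 2.0 for j in range(3)] for i in range(3)]
    W = [[(A[i][j] - A[j][i]) / 2.0 for j in range(3)] for i in range(3)]
    B = [[S[i][j] - (a if i == j else 0.0) for j in range(3)] for i in range(3)]
    # W x = p ^ x, with p read off from W_{ij} = -eps_{ijk} p_k:
    p = (W[2][1], W[0][2], W[1][0])
    return a, p, B
```

For the matrix in the question this gives $a = 3$, $\mathbf{p} = (-2, 1, 1)$ and a traceless symmetric $B$.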

• # 2.II.11G

Explain what is meant by an isotropic tensor.

Show that the fourth-rank tensor

$A_{i j k l}=\alpha \delta_{i j} \delta_{k l}+\beta \delta_{i k} \delta_{j l}+\gamma \delta_{i l} \delta_{j k} \qquad (*)$

is isotropic for arbitrary scalars $\alpha, \beta$ and $\gamma$.

Assuming that the most general isotropic tensor of rank 4 has the form $(*)$, or otherwise, evaluate

$B_{i j k l}=\int_{r<a} x_{i} x_{j} x_{k} x_{l} \, d V$

where $\mathbf{x}$ is the position vector and $r=|\mathbf{x}|$.


• # 2.I.5E

Find an LU factorization of the matrix

$A=\left(\begin{array}{rrrr} 2 & -1 & 3 & 2 \\ -4 & 3 & -4 & -2 \\ 4 & -2 & 3 & 6 \\ -6 & 5 & -8 & 1 \end{array}\right)$

and use it to solve the linear system $A \mathbf{x}=\mathbf{b}$, where

$\mathbf{b}=\left(\begin{array}{r} -2 \\ 2 \\ 4 \\ 11 \end{array}\right)$
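As an informal check of the hand computation (helper names are ours; this matrix needs no pivoting, since every pivot turns out to be nonzero), a minimal Doolittle LU sketch with forward and back substitution:

```python
def lu_decompose(A):
    """Doolittle LU factorization without pivoting: A = L U with L unit
    lower-triangular and U upper-triangular."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[float(x) for x in row] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]  # multiplier stored in L
            L[i][k] = m
            for j in range(n):
                U[i][j] -= m * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Solve A x = b via L y = b (forward) then U x = y (backward)."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```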

• # 2.II.14E

(a) Let $B$ be an $n \times n$ positive-definite, symmetric matrix. Define the Cholesky factorization of $B$ and prove that it is unique.

(b) Let $A$ be an $m \times n$ matrix, $m \geqslant n$, such that $\operatorname{rank} A=n$. Prove the uniqueness of the "skinny QR factorization"

$A=Q R,$

where the matrix $Q$ is $m \times n$ with orthonormal columns, while $R$ is an $n \times n$ upper-triangular matrix with positive diagonal elements.

[Hint: Show that you may choose $R$ as a matrix that features in the Cholesky factorization of $B=A^{T} A$.]
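Following the hint, a toy implementation (names of our own choosing): take $R$ to be the upper-triangular Cholesky factor of $B = A^{T}A$, then recover $Q = A R^{-1}$ row by row:

```python
import math

def cholesky_upper(B):
    """Upper-triangular R with R^T R = B and positive diagonal (B SPD)."""
    n = len(B)
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = B[i][j] - sum(R[k][i] * R[k][j] for k in range(i))
            R[i][j] = math.sqrt(s) if i == j else s / R[i][i]
    return R

def skinny_qr(A):
    """Skinny QR via the hint: R from the Cholesky factorization of A^T A,
    then Q = A R^{-1} (each row of Q solves q R = row of A)."""
    m, n = len(A), len(A[0])
    B = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         for i in range(n)]
    R = cholesky_upper(B)
    Q = []
    for row in A:
        q = [0.0] * n
        for i in range(n):  # forward substitution against R^T
            q[i] = (row[i] - sum(R[k][i] * q[k] for k in range(i))) / R[i][i]
        Q.append(q)
    return Q, R
```

(In floating-point practice one forms $Q$ by Householder reflections or Gram-Schmidt rather than via $A^{T}A$; the sketch only mirrors the hint's uniqueness argument.)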


• # 2.I.8B

Let $V$ be a finite-dimensional vector space over a field $k$. Describe a bijective correspondence between the set of bilinear forms on $V$, and the set of linear maps of $V$ to its dual space $V^{*}$. If $\phi_{1}, \phi_{2}$ are non-degenerate bilinear forms on $V$, prove that there exists an isomorphism $\alpha: V \rightarrow V$ such that $\phi_{2}(u, v)=\phi_{1}(u, \alpha v)$ for all $u, v \in V$. If furthermore both $\phi_{1}, \phi_{2}$ are symmetric, show that $\alpha$ is self-adjoint (i.e. equals its adjoint) with respect to $\phi_{1}$.

• # 2.II.17B

Suppose $p$ is an odd prime and $a$ an integer coprime to $p$. Define the Legendre symbol $\left(\frac{a}{p}\right)$, and state (without proof) Euler's criterion for its calculation.

For $j$ any positive integer, we denote by $r_{j}$ the (unique) integer with $\left|r_{j}\right| \leq(p-1) / 2$ and $r_{j} \equiv a j \bmod p$. Let $l$ be the number of integers $1 \leq j \leq(p-1) / 2$ for which $r_{j}$ is negative. Prove that

$\left(\frac{a}{p}\right)=(-1)^{l} .$

Hence determine the odd primes for which 2 is a quadratic residue.
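The counting argument above (Gauss's lemma) can be exercised numerically; in the sketch below (function names are ours) the residue $r_j$ is negative exactly when $aj \bmod p$ exceeds $p/2$:

```python
def legendre_gauss(a, p):
    """(a/p) via the counting argument: l = #{1 <= j <= (p-1)/2 : r_j < 0},
    where r_j is a*j reduced into (-p/2, p/2).  Assumes p odd, gcd(a,p)=1."""
    l = 0
    for j in range(1, (p - 1) // 2 + 1):
        if (a * j) % p > p // 2:  # representative r_j lies in (-p/2, 0)
            l += 1
    return (-1) ** l

def legendre_euler(a, p):
    """Euler's criterion: (a/p) = a^((p-1)/2) mod p, taken in {-1, +1}."""
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1
```

Running both for $a=2$ over small odd primes reproduces the familiar answer: 2 is a quadratic residue exactly when $p \equiv \pm 1 \bmod 8$.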

Suppose that $p_{1}, \ldots, p_{m}$ are primes congruent to 7 modulo 8, and let

$N=8\left(p_{1} \cdots p_{m}\right)^{2}-1$

Show that 2 is a quadratic residue for any prime dividing $N$. Prove that $N$ is divisible by some prime $p \equiv 7 \bmod 8$. Hence deduce that there are infinitely many primes congruent to 7 modulo 8 .


• # 2.I.9F

Consider a solution $\psi(x, t)$ of the time-dependent Schrödinger equation for a particle of mass $m$ in a potential $V(x)$. The expectation value of an operator $\mathcal{O}$ is defined as

$\langle\mathcal{O}\rangle=\int d x \psi^{*}(x, t) \mathcal{O} \psi(x, t)$

Show that

$\frac{d}{d t}\langle x\rangle=\frac{\langle p\rangle}{m},$

where

$p=\frac{\hbar}{i} \frac{\partial}{\partial x},$

and that

$\frac{d}{d t}\langle p\rangle=\left\langle-\frac{\partial V}{\partial x}(x)\right\rangle$

[You may assume that $\psi(x, t)$ vanishes as $x \rightarrow \pm \infty$.]
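A sketch of the expected pattern of the derivation (assuming $H = p^{2}/2m + V(x)$, the Schrödinger equation $i\hbar\,\partial_t\psi = H\psi$, and the stated decay of $\psi$ to justify integration by parts):

```latex
\begin{aligned}
\frac{d}{dt}\langle x\rangle
  &= \int dx\,\bigl(\partial_t\psi^*\, x\,\psi + \psi^*\, x\,\partial_t\psi\bigr)
   = \frac{i}{\hbar}\int dx\,\psi^*\,[H, x]\,\psi, \\
[H, x] &= \frac{1}{2m}\,[p^2, x] = -\frac{i\hbar}{m}\,p
\quad\Longrightarrow\quad
\frac{d}{dt}\langle x\rangle = \frac{\langle p\rangle}{m}, \\
[H, p] &= [V, p] = i\hbar\,\frac{\partial V}{\partial x}
\quad\Longrightarrow\quad
\frac{d}{dt}\langle p\rangle = \Bigl\langle -\frac{\partial V}{\partial x}\Bigr\rangle.
\end{aligned}
```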

• # 2.II.18F

(a) Write down the angular momentum operators $L_{1}, L_{2}, L_{3}$ in terms of $x_{i}$ and

$p_{i}=-i \hbar \frac{\partial}{\partial x_{i}}, i=1,2,3$

Verify the commutation relation

$\left[L_{1}, L_{2}\right]=i \hbar L_{3}$

Show that this result and its cyclic permutations imply

\begin{aligned} &{\left[L_{3}, L_{1} \pm i L_{2}\right]=\pm \hbar\left(L_{1} \pm i L_{2}\right)} \\ &{\left[\mathbf{L}^{2}, L_{1} \pm i L_{2}\right]=0} \end{aligned}

(b) Consider a wavefunction of the form $\psi=\left(x_{3}^{2}+a r^{2}\right) f(r)$, where $r^{2}=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}$. Show that for a particular value of $a, \psi$ is an eigenfunction of both $\mathbf{L}^{2}$ and $L_{3}$. What are the corresponding eigenvalues?
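For part (b), a sketch of where the special value of $a$ comes from (assuming the standard normalization $Y_{2,0} \propto 3\cos^{2}\theta - 1$):

```latex
% Writing x_3 = r\cos\theta, the angular dependence of \psi is
\psi = r^{2}\bigl(\cos^{2}\theta + a\bigr) f(r),
% which is proportional to Y_{2,0} \propto 3\cos^{2}\theta - 1 precisely
% when a = -1/3.  Then
\mathbf{L}^{2}\psi = 6\hbar^{2}\psi \quad (l = 2), \qquad
L_{3}\psi = 0 \quad (m = 0).
```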


• # 2.I.3D

Suppose the single random variable $X$ has a uniform distribution on the interval $[0, \theta]$ and it is required to estimate $\theta$ with the loss function

$L(\theta, a)=c(\theta-a)^{2}$

where $c>0$.

Find the posterior distribution for $\theta$ and the optimal Bayes point estimate with respect to the prior distribution with density $p(\theta)=\theta e^{-\theta}, \theta>0$.
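As a numerical sanity check on the posterior computation (a sketch, not part of the question; the function name and quadrature grid are ours, and we use the standard fact that under squared-error loss the Bayes estimate is the posterior mean, whatever the value of $c$):

```python
import math

def bayes_estimate(x, m=200000, cutoff=50.0):
    """Posterior mean of theta given the observation X = x.

    The prior theta * exp(-theta) times the uniform likelihood 1/theta on
    theta >= x gives a posterior density proportional to exp(-theta) on
    [x, oo); the mean is computed by the trapezoid rule, truncating the
    (exponentially small) tail beyond `cutoff`.
    """
    h = (cutoff - x) / m
    num = den = 0.0
    for k in range(m + 1):
        t = x + k * h
        w = (0.5 if k in (0, m) else 1.0) * math.exp(-t)
        num += w * t
        den += w
    return num / den
```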

• # 2.II.12D

What is meant by a generalized likelihood ratio test? Explain in detail how to perform such a test.

Let $X_{1}, \ldots, X_{n}$ be independent random variables, and let $X_{i}$ have a Poisson distribution with unknown mean $\lambda_{i}, i=1, \ldots, n$.

Find the form of the generalized likelihood ratio statistic for testing $H_{0}: \lambda_{1}=\ldots=\lambda_{n}$, and show that it may be approximated by

$\frac{1}{\bar{X}} \sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)^{2},$

where $\bar{X}=n^{-1} \sum_{i=1}^{n} X_{i}$.

If, for $n=7$, you found that the value of this statistic was $27.3$, would you accept $H_{0}$ ? Justify your answer.
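The approximate statistic is easy to compute directly; a sketch (the function name is ours), together with the tabulated critical value needed for the last part:

```python
def glr_statistic(xs):
    """The approximate generalized likelihood ratio statistic above:
    (1 / xbar) * sum of (x_i - xbar)^2."""
    n = len(xs)
    xbar = sum(xs) / n
    return sum((x - xbar) ** 2 for x in xs) / xbar

# Under H0 the statistic is approximately chi-squared with n - 1 degrees
# of freedom.  For n = 7 the tabulated 95% point of chi^2_6 is about
# 12.59, so an observed value of 27.3 leads to rejection of H0 at the
# 5% level.
```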
