• # 1.I.1B

Consider the cone $K$ in $\mathbb{R}^{3}$ defined by

$x_{3}^{2}=x_{1}^{2}+x_{2}^{2}, \quad x_{3}>0 .$

Find a unit normal $\mathbf{n}=\left(n_{1}, n_{2}, n_{3}\right)$ to $K$ at the point $\mathbf{x}=\left(x_{1}, x_{2}, x_{3}\right)$ such that $n_{3} \geqslant 0$.

Show that if $\mathbf{p}=\left(p_{1}, p_{2}, p_{3}\right)$ satisfies

$p_{3}^{2} \geqslant p_{1}^{2}+p_{2}^{2}$

and $p_{3} \geqslant 0$ then

$\mathbf{p} \cdot \mathbf{n} \geqslant 0$

• # 1.I.2A

Express the unit vector $\mathbf{e}_{r}$ of spherical polar coordinates in terms of the orthonormal Cartesian basis vectors $\mathbf{i}, \mathbf{j}, \mathbf{k}$.

Express the equation for the paraboloid $z=x^{2}+y^{2}$ in (i) cylindrical polar coordinates $(\rho, \phi, z)$ and (ii) spherical polar coordinates $(r, \theta, \phi)$.

In spherical polar coordinates, a surface is defined by $r^{2} \cos 2 \theta=a$, where $a$ is a real non-zero constant. Find the corresponding equation for this surface in Cartesian coordinates and sketch the surfaces in the two cases $a>0$ and $a<0$.

• # 1.II.5C

Prove the Cauchy-Schwarz inequality,

$|\mathbf{x} \cdot \mathbf{y}| \leqslant|\mathbf{x}||\mathbf{y}|$

for two vectors $\mathbf{x}, \mathbf{y} \in \mathbb{R}^{n}$. Under what condition does equality hold?

Consider a pyramid in $\mathbb{R}^{n}$ with vertices at the origin $O$ and at $\mathbf{e}_{1}, \mathbf{e}_{2}, \ldots, \mathbf{e}_{n}$, where $\mathbf{e}_{1}=(1,0,0, \ldots), \mathbf{e}_{2}=(0,1,0, \ldots)$, and so on. The "base" of the pyramid is the $(n-1)$-dimensional object $B$ specified by $\left(\mathbf{e}_{1}+\mathbf{e}_{2}+\cdots+\mathbf{e}_{n}\right) \cdot \mathbf{x}=1, \mathbf{e}_{i} \cdot \mathbf{x} \geqslant 0$ for $i=1, \ldots, n$.

Find the point $C$ in $B$ equidistant from each vertex of $B$ and find the length of $OC$. ($C$ is the centroid of $B$.)

Show, using the Cauchy-Schwarz inequality, that this is the closest point in $B$ to the origin $O$.

Calculate the angle between $O C$ and any edge of the pyramid connected to $O$. What happens to this angle and to the length of $O C$ as $n$ tends to infinity?
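
The limiting behaviour can be sketched numerically. The check below is hedged: it assumes the answers $C=(1/n,\ldots,1/n)$, $|OC|=1/\sqrt{n}$ and $\cos(\text{angle})=1/\sqrt{n}$, which the question asks you to derive, not facts given in it.

```python
# Hedged numerical sketch: assumes C = (1/n, ..., 1/n) is the centroid,
# so |OC| = 1/sqrt(n) and the angle between OC and any edge e_i has
# cosine 1/sqrt(n); these are assumptions to be proved, not givens.
import math

for n in (2, 3, 10, 100):
    C = [1.0 / n] * n
    OC = math.sqrt(sum(c * c for c in C))          # length |OC|
    assert abs(OC - 1 / math.sqrt(n)) < 1e-12
    cos_angle = C[0] / OC                          # e_1 . C / (|e_1| |C|)
    assert abs(cos_angle - 1 / math.sqrt(n)) < 1e-12

# As n grows, |OC| -> 0 and the angle -> pi/2.
```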

• # 1.II.6C

Given a vector $\mathbf{x}=\left(x_{1}, x_{2}\right) \in \mathbb{R}^{2}$, write down the vector $\mathbf{x}^{\prime}$ obtained by rotating $\mathbf{x}$ through an angle $\theta$.

Given a unit vector $\mathbf{n} \in \mathbb{R}^{3}$, any vector $\mathbf{x} \in \mathbb{R}^{3}$ may be written as $\mathbf{x}=\mathbf{x}_{\|}+\mathbf{x}_{\perp}$ where $\mathbf{x}_{\|}$is parallel to $\mathbf{n}$ and $\mathbf{x}_{\perp}$ is perpendicular to $\mathbf{n}$. Write down explicit formulae for $\mathbf{x}_{\|}$and $\mathbf{x}_{\perp}$, in terms of $\mathbf{n}$ and $\mathbf{x}$. Hence, or otherwise, show that the linear map

$\mathbf{x} \mapsto \mathbf{x}^{\prime}=(\mathbf{x} \cdot \mathbf{n}) \mathbf{n}+\cos \theta(\mathbf{x}-(\mathbf{x} \cdot \mathbf{n}) \mathbf{n})+\sin \theta(\mathbf{n} \times \mathbf{x}) \tag{*}$

describes a rotation about $\mathbf{n}$ through an angle $\theta$, in the positive sense defined by the right hand rule.

Write equation $(*)$ in matrix form, $x_{i}^{\prime}=R_{i j} x_{j}$. Show that the trace $R_{i i}=1+2 \cos \theta$.

Given the rotation matrix

$R=\frac{1}{2}\left(\begin{array}{ccc} 1+r & 1-r & 1 \\ 1-r & 1+r & -1 \\ -1 & 1 & 2 r \end{array}\right)$

where $r=1 / \sqrt{2}$, find the two pairs $(\theta, \mathbf{n})$, with $-\pi \leqslant \theta<\pi$, giving rise to $R$. Explain why both represent the same rotation.
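
A numerical sanity check of the trace formula for this particular $R$ can be sketched as follows (hedged: the value $\theta=\pi/4$ it recovers is part of the answer, obtained here only numerically, not proved):

```python
# Sketch: verify that R is orthogonal and that trace R = 1 + 2 cos(theta),
# giving |theta| = pi/4 for r = 1/sqrt(2).
import math

r = 1 / math.sqrt(2)
R = [[(1 + r) / 2, (1 - r) / 2,  1 / 2],
     [(1 - r) / 2, (1 + r) / 2, -1 / 2],
     [-1 / 2,       1 / 2,       r]]

# Orthogonality: rows are orthonormal, i.e. R R^T = I
for i in range(3):
    for j in range(3):
        dot = sum(R[i][k] * R[j][k] for k in range(3))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-12

trace = sum(R[i][i] for i in range(3))       # equals 1 + 2r
theta = math.acos((trace - 1) / 2)           # |theta| from the trace formula
print(theta)                                 # pi/4
```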

• # 1.II.7B

(i) Let $\mathbf{u}, \mathbf{v}$ be unit vectors in $\mathbb{R}^{3}$. Write the transformation on vectors $\mathbf{x} \in \mathbb{R}^{3}$

$\mathbf{x} \mapsto(\mathbf{u} \cdot \mathbf{x}) \mathbf{u}+\mathbf{v} \times \mathbf{x}$

in matrix form as $\mathbf{x} \mapsto A \mathbf{x}$ for a matrix $A$. Find the eigenvalues in the two cases (a) when $\mathbf{u} \cdot \mathbf{v}=0$, and (b) when $\mathbf{u}, \mathbf{v}$ are parallel.

(ii) Let $\mathcal{M}$ be the set of $2 \times 2$ complex hermitian matrices with trace zero. Show that if $A \in \mathcal{M}$ there is a unique vector $\mathrm{x} \in \mathbb{R}^{3}$ such that

$A=\mathcal{R}(\mathbf{x})=\left(\begin{array}{cc} x_{3} & x_{1}-i x_{2} \\ x_{1}+i x_{2} & -x_{3} \end{array}\right)$

Show that if $U$ is a $2 \times 2$ unitary matrix, the transformation

$A \mapsto U^{-1} A U$

maps $\mathcal{M}$ to $\mathcal{M}$, and that if $U^{-1} \mathcal{R}(\mathbf{x}) U=\mathcal{R}(\mathbf{y})$, then $\|\mathbf{x}\|=\|\mathbf{y}\|$ where $\|\cdot\|$ means ordinary Euclidean length. [Hint: Consider determinants.]

• # 1.II.8A

(i) State de Moivre's theorem. Use it to express $\cos 5 \theta$ as a polynomial in $\cos \theta$.

(ii) Find the two fixed points of the Möbius transformation

$z \longmapsto \omega=\frac{3 z+1}{z+3}$

that is, find the two values of $z$ for which $\omega=z$.

Given that $c \neq 0$ and $(a-d)^{2}+4 b c \neq 0$, show that a general Möbius transformation

$z \longmapsto \omega=\frac{a z+b}{c z+d}, \quad a d-b c \neq 0,$

has two fixed points $\alpha, \beta$ given by

$\alpha=\frac{a-d+m}{2 c}, \quad \beta=\frac{a-d-m}{2 c}$

where $\pm m$ are the square roots of $(a-d)^{2}+4 b c$.

Show that such a transformation can be expressed in the form

$\frac{\omega-\alpha}{\omega-\beta}=k \frac{z-\alpha}{z-\beta},$

where $k$ is a constant that you should determine.
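
For the specific map in part (ii) this structure can be checked numerically. The sketch below assumes the fixed points are $z=\pm 1$ and finds the multiplier $k$ empirically (both are things the question asks you to work out):

```python
# Sketch (not a proof): for w = (3z+1)/(z+3) the fixed points are z = 1,
# z = -1, and (w-1)/(w+1) = k (z-1)/(z+1) with the same constant k for
# every test point z.

def mobius(z):
    return (3 * z + 1) / (z + 3)

for z in (1, -1):                    # assumed fixed points of this map
    assert abs(mobius(z) - z) < 1e-12

ks = []
for z in (0.3 + 0.7j, -2 + 1j, 1.5 - 0.5j, 2j):
    w = mobius(z)
    ks.append(((w - 1) / (w + 1)) / ((z - 1) / (z + 1)))
assert all(abs(k - ks[0]) < 1e-12 for k in ks)
print(ks[0])                         # the multiplier k; here k = 1/2
```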

• # 3.I.1D

Give an example of a real $3 \times 3$ matrix $A$ with eigenvalues $-1,(1 \pm i) / \sqrt{2}$. Prove or give a counterexample to the following statements:

(i) any such $A$ is diagonalisable over $\mathbb{C}$;

(ii) any such $A$ is orthogonal;

(iii) any such $A$ is diagonalisable over $\mathbb{R}$.

• # 3.I.2D

Show that if $H$ and $K$ are subgroups of a group $G$, then $H \cap K$ is also a subgroup of $G$. Show also that if $H$ and $K$ have orders $p$ and $q$ respectively, where $p$ and $q$ are coprime, then $H \cap K$ contains only the identity element of $G$. [You may use Lagrange's theorem provided it is clearly stated.]

• # 3.II.5D

Let $G$ be a group and let $A$ be a non-empty subset of $G$. Show that

$C(A)=\{g \in G: g h=h g \quad \text { for all } h \in A\}$

is a subgroup of $G$.

Show that $\rho: G \times G \rightarrow G$ given by

$\rho(g, h)=g h g^{-1}$

defines an action of $G$ on itself.

Suppose $G$ is finite, let $O_{1}, \ldots, O_{n}$ be the orbits of the action $\rho$ and let $h_{i} \in O_{i}$ for $i=1, \ldots, n$. Using the Orbit-Stabilizer Theorem, or otherwise, show that

$|G|=|C(G)|+\sum_{i}|G| /\left|C\left(\left\{h_{i}\right\}\right)\right|$

where the sum runs over all values of $i$ such that $\left|O_{i}\right|>1$.

Let $G$ be a finite group of order $p^{r}$, where $p$ is a prime and $r$ is a positive integer. Show that $C(G)$ contains more than one element.

• # 3.II.6D

Let $\theta: G \rightarrow H$ be a homomorphism between two groups $G$ and $H$. Show that the image of $\theta, \theta(G)$, is a subgroup of $H$; show also that the kernel of $\theta, \operatorname{ker}(\theta)$, is a normal subgroup of $G$.

Show that $G / \operatorname{ker}(\theta)$ is isomorphic to $\theta(G)$.

Let $O(3)$ be the group of $3 \times 3$ real orthogonal matrices and let $S O(3) \subset O(3)$ be the set of orthogonal matrices with determinant 1. Show that $S O(3)$ is a normal subgroup of $O(3)$ and that $O(3) / S O(3)$ is isomorphic to the cyclic group of order 2.

Give an example of a homomorphism from $O(3)$ to $S O(3)$ with kernel of order 2.

• # 3.II.7D

Let $S L(2, \mathbb{R})$ be the group of $2 \times 2$ real matrices with determinant 1 and let $\sigma: \mathbb{R} \rightarrow S L(2, \mathbb{R})$ be a homomorphism. On $K=\mathbb{R} \times \mathbb{R}^{2}$ consider the product

$(x, \mathbf{v}) *(y, \mathbf{w})=(x+y, \mathbf{v}+\sigma(x) \mathbf{w})$

Show that $K$ with this product is a group.

Find the homomorphism or homomorphisms $\sigma$ for which $K$ is a commutative group.

Show that the homomorphisms $\sigma$ for which the elements of the form $(0, \mathbf{v})$ with $\mathbf{v}=(a, 0), a \in \mathbb{R}$, commute with every element of $K$ are precisely those such that

$\sigma(x)=\left(\begin{array}{cc} 1 & r(x) \\ 0 & 1 \end{array}\right)$

with $r:(\mathbb{R},+) \rightarrow(\mathbb{R},+)$ an arbitrary homomorphism.

• # 3.II.8D

Show that every Möbius transformation can be expressed as a composition of maps of the forms: $S_{1}(z)=z+\alpha, S_{2}(z)=\lambda z$ and $S_{3}(z)=1 / z$, where $\alpha, \lambda \in \mathbb{C}$.

Show that if $z_{1}, z_{2}, z_{3}$ and $w_{1}, w_{2}, w_{3}$ are two triples of distinct points in $\mathbb{C} \cup\{\infty\}$, there exists a unique Möbius transformation that takes $z_{j}$ to $w_{j}(j=1,2,3)$.

Let $G$ be the group of those Möbius transformations which map the set $\{0,1, \infty\}$ to itself. Find all the elements of $G$. To which standard group is $G$ isomorphic?


• # 1.I.3F

Let $a_{n} \in \mathbb{R}$ for $n \geqslant 1$. What does it mean to say that the infinite series $\sum_{n} a_{n}$ converges to some value $A$ ? Let $s_{n}=a_{1}+\cdots+a_{n}$ for all $n \geqslant 1$. Show that if $\sum_{n} a_{n}$ converges to some value $A$, then the sequence whose $n$-th term is

$\left(s_{1}+\cdots+s_{n}\right) / n$

converges to some value $\tilde{A}$ as $n \rightarrow \infty$. Is it always true that $A=\tilde{A}$? Give an example where $\left(s_{1}+\cdots+s_{n}\right) / n$ converges but $\sum_{n} a_{n}$ does not.
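
One standard example can be sketched numerically; the choice $a_{n}=(-1)^{n+1}$ below is an illustration picked for this sketch, not the only answer:

```python
# Sketch: for a_n = (-1)^(n+1) the series 1 - 1 + 1 - ... diverges, but
# the Cesaro averages (s_1 + ... + s_n)/n of the partial sums tend to 1/2.
N = 100000
s, cum = 0, 0
for n in range(1, N + 1):
    s += (-1) ** (n + 1)    # partial sum s_n alternates 1, 0, 1, 0, ...
    cum += s                # running total s_1 + ... + s_n
print(cum / N)              # close to 0.5
```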

• # 1.I.4D

Let $\sum_{n=0}^{\infty} a_{n} z^{n}$ and $\sum_{n=0}^{\infty} b_{n} z^{n}$ be power series in the complex plane with radii of convergence $R$ and $S$ respectively. Show that if $R \neq S$ then $\sum_{n=0}^{\infty}\left(a_{n}+b_{n}\right) z^{n}$ has radius of convergence $\min (R, S)$. [Any results on absolute convergence that you use should be clearly stated.]

• # 1.II.10E

Prove that if the function $f$ is infinitely differentiable on an interval $(r, s)$ containing $a$, then for any $x \in(r, s)$ and any positive integer $n$ we may expand $f(x)$ in the form

$f(a)+(x-a) f^{\prime}(a)+\frac{(x-a)^{2}}{2 !} f^{\prime \prime}(a)+\cdots+\frac{(x-a)^{n}}{n !} f^{(n)}(a)+R_{n}(f, a, x),$

where the remainder term $R_{n}(f, a, x)$ should be specified explicitly in terms of $f^{(n+1)}$.

Let $p(t)$ be a nonzero polynomial in $t$, and let $f$ be the real function defined by

$f(x)=p\left(\frac{1}{x}\right) \exp \left(-\frac{1}{x^{2}}\right) \quad(x \neq 0), \quad f(0)=0 .$

Show that $f$ is differentiable everywhere and that

$f^{\prime}(x)=q\left(\frac{1}{x}\right) \exp \left(-\frac{1}{x^{2}}\right) \quad(x \neq 0), \quad f^{\prime}(0)=0,$

where $q(t)=2 t^{3} p(t)-t^{2} p^{\prime}(t)$. Deduce that $f$ is infinitely differentiable, but that there exist arbitrarily small values of $x$ for which the remainder term $R_{n}(f, 0, x)$ in the Taylor expansion of $f$ about 0 does not tend to 0 as $n \rightarrow \infty$.

• # 1.II.11F

Consider a sequence $\left(a_{n}\right)_{n \geqslant 1}$ of real numbers. What does it mean to say that $a_{n} \rightarrow a \in \mathbb{R}$ as $n \rightarrow \infty$? What does it mean to say that $a_{n} \rightarrow \infty$ as $n \rightarrow \infty$? What does it mean to say that $a_{n} \rightarrow-\infty$ as $n \rightarrow \infty$? Show that for every sequence of real numbers there exists a subsequence which converges to a value in $\mathbb{R} \cup\{\infty,-\infty\}$. [You may use the Bolzano-Weierstrass theorem provided it is clearly stated.]

Give an example of a bounded sequence $\left(a_{n}\right)_{n \geqslant 1}$ which is not convergent, but for which

$a_{n+1}-a_{n} \rightarrow 0 \quad \text { as } \quad n \rightarrow \infty$

• # 1.II.12D

Let $f_{1}$ and $f_{2}$ be Riemann integrable functions on $[a, b]$. Show that $f_{1}+f_{2}$ is Riemann integrable.

Let $f$ be a Riemann integrable function on $[a, b]$ and set $f^{+}(x)=\max (f(x), 0)$. Show that $f^{+}$and $|f|$ are Riemann integrable.

Let $f$ be a function on $[a, b]$ such that $|f|$ is Riemann integrable. Is it true that $f$ is Riemann integrable? Justify your answer.

Show that if $f_{1}$ and $f_{2}$ are Riemann integrable on $[a, b]$, then so is $\max \left(f_{1}, f_{2}\right)$. Suppose now $f_{1}, f_{2}, \ldots$ is a sequence of Riemann integrable functions on $[a, b]$ and $f(x)=\sup _{n} f_{n}(x)$; is it true that $f$ is Riemann integrable? Justify your answer.

• # 1.II.9E

State and prove the Intermediate Value Theorem.

Suppose that the function $f$ is differentiable everywhere in some open interval containing $[a, b]$, and that $f^{\prime}(a)<k<f^{\prime}(b)$. By considering the functions $g$ and $h$ defined by

$g(x)=\frac{f(x)-f(a)}{x-a} \quad(a<x \leqslant b), \quad g(a)=f^{\prime}(a)$

and

$h(x)=\frac{f(b)-f(x)}{b-x} \quad(a \leqslant x<b), \quad h(b)=f^{\prime}(b)$

or otherwise, show that there is a subinterval $\left[a^{\prime}, b^{\prime}\right] \subseteq[a, b]$ such that

$\frac{f\left(b^{\prime}\right)-f\left(a^{\prime}\right)}{b^{\prime}-a^{\prime}}=k$

Deduce that there exists $c \in(a, b)$ with $f^{\prime}(c)=k$. [You may assume the Mean Value Theorem.]


• # 2.I.1B

Solve the initial value problem

$\frac{d x}{d t}=x(1-x), \quad x(0)=x_{0},$

and sketch the phase portrait. Describe the behaviour as $t \rightarrow+\infty$ and as $t \rightarrow-\infty$ of solutions with initial value satisfying $0<x_{0}<1$.
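
The closed-form solution can be checked against the differential equation numerically. The sketch below assumes the answer $x(t)=x_{0} e^{t} /\left(1-x_{0}+x_{0} e^{t}\right)$, which is what the question asks you to derive:

```python
# Hedged check: assume x(t) = x0 e^t / (1 - x0 + x0 e^t) and verify, by
# central differences, that dx/dt = x(1 - x) and that x(0) = x0.
import math

def x(t, x0=0.2):
    return x0 * math.exp(t) / (1 - x0 + x0 * math.exp(t))

h = 1e-6
for t in (-2.0, 0.0, 1.5, 3.0):
    lhs = (x(t + h) - x(t - h)) / (2 * h)    # numerical dx/dt
    rhs = x(t) * (1 - x(t))
    assert abs(lhs - rhs) < 1e-6

assert abs(x(0) - 0.2) < 1e-12               # initial condition
# x(t) -> 1 as t -> +inf and x(t) -> 0 as t -> -inf for 0 < x0 < 1.
```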

• # 2.I.2B

Consider the first order system

$\frac{d \mathbf{x}}{d t}-A \mathbf{x}=e^{\lambda t} \mathbf{v}$

to be solved for $\mathbf{x}(t)=\left(x_{1}(t), x_{2}(t), \ldots, x_{n}(t)\right) \in \mathbb{R}^{n}$, where $A$ is an $n \times n$ matrix, $\lambda \in \mathbb{R}$ and $\mathbf{v} \in \mathbb{R}^{n}$. Show that if $\lambda$ is not an eigenvalue of $A$ there is a solution of the form $\mathbf{x}(t)=e^{\lambda t} \mathbf{u}$. For $n=2$, given

$A=\left(\begin{array}{ll} 0 & 1 \\ 0 & 0 \end{array}\right), \quad \lambda=1, \quad \text { and } \quad \mathbf{v}=\left(\begin{array}{l} 1 \\ 1 \end{array}\right)$

find this solution.

• # 2.II.5B

Find the general solution of the system

\begin{aligned} &\frac{d x}{d t}=5 x+3 y+e^{2 t} \\ &\frac{d y}{d t}=2 x+2 e^{t} \\ &\frac{d z}{d t}=x+y+e^{t} \end{aligned}

• # 2.II.6B

(i) Consider the equation

$\frac{\partial u}{\partial t}+\frac{\partial u}{\partial x}=\frac{\partial^{2} u}{\partial x^{2}}+f(t, x)$

and, using the change of variables $(t, x) \mapsto(s, y)=(t, x-t)$, show that it can be transformed into an equation of the form

$\frac{\partial U}{\partial s}=\frac{\partial^{2} U}{\partial y^{2}}+F(s, y)$

where $U(s, y)=u(s, y+s)$ and you should determine $F(s, y)$.

(ii) Let $H(y)$ be the Heaviside function. Find the general continuously differentiable solution of the equation

$w^{\prime \prime}(y)+H(y)=0$

(iii) Using (i) and (ii), find a continuously differentiable solution of

$\frac{\partial u}{\partial t}+\frac{\partial u}{\partial x}=\frac{\partial^{2} u}{\partial x^{2}}+H(x-t)$

such that $u(t, x) \rightarrow 0$ as $x \rightarrow-\infty$ and $u(t, x) \rightarrow-\infty$ as $x \rightarrow+\infty$.

• # 2.II.7B

Let $p, q$ be continuous functions and let $y_{1}(x)$ and $y_{2}(x)$ be, respectively, the solutions of the initial value problems

\begin{aligned} &y_{1}^{\prime \prime}+p(x) y_{1}^{\prime}+q(x) y_{1}=0, \quad y_{1}(0)=0, y_{1}^{\prime}(0)=1, \\ &y_{2}^{\prime \prime}+p(x) y_{2}^{\prime}+q(x) y_{2}=0, \quad y_{2}(0)=1, y_{2}^{\prime}(0)=0 . \end{aligned}

If $f$ is any continuous function show that the solution of

$y^{\prime \prime}+p(x) y^{\prime}+q(x) y=f(x), \quad y(0)=0, y^{\prime}(0)=0$

is given by

$y(x)=\int_{0}^{x} \frac{y_{1}(s) y_{2}(x)-y_{1}(x) y_{2}(s)}{W(s)} f(s) d s,$

where $W(x)=y_{1}(x) y_{2}^{\prime}(x)-y_{1}^{\prime}(x) y_{2}(x)$ is the Wronskian. Use this method to find $y=y(x)$ such that

$y^{\prime \prime}+y=\sin x, \quad y(0)=0, y^{\prime}(0)=0 .$
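
For the final example the variation-of-parameters integral can be checked by quadrature. The sketch assumes $y_{1}=\sin x$, $y_{2}=\cos x$, $W=-1$ and the answer $y(x)=(\sin x-x \cos x)/2$, all of which the question asks you to obtain:

```python
# Hedged check of the Wronskian formula for y'' + y = sin x:
# y1 = sin, y2 = cos, W = -1, and the expected answer (sin x - x cos x)/2.
import math

def integrand(s, x):
    # (y1(s) y2(x) - y1(x) y2(s)) / W(s) * f(s), with f = sin
    return -(math.sin(s) * math.cos(x) - math.sin(x) * math.cos(s)) * math.sin(s)

def y(x, n=2000):
    # composite Simpson's rule on [0, x]
    h = x / n
    total = integrand(0.0, x) + integrand(x, x)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * integrand(k * h, x)
    return total * h / 3

for x in (0.5, 1.0, 2.0):
    exact = (math.sin(x) - x * math.cos(x)) / 2
    assert abs(y(x) - exact) < 1e-8
```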

• # 2.II.8B

Obtain a power series solution of the problem

$x y^{\prime \prime}+y=0, \quad y(0)=0, y^{\prime}(0)=1$

[You need not find the general power series solution.]

Let $y_{0}(x), y_{1}(x), y_{2}(x), \ldots$ be defined recursively as follows: $y_{0}(x)=x$. Given $y_{n-1}(x)$, define $y_{n}(x)$ to be the solution of

$x y_{n}^{\prime \prime}(x)=-y_{n-1}, \quad y_{n}(0)=0, y_{n}^{\prime}(0)=1$

By calculating $y_{1}, y_{2}, y_{3}$, or otherwise, obtain and prove a general formula for $y_{n}(x)$. Comment on the relation to the power series solution obtained previously.


• # 4.I.3C

A car is at rest on a horizontal surface. The engine is switched on and suddenly sets the wheels spinning at a constant angular velocity $\Omega$. The wheels have radius $r$ and the coefficient of friction between the ground and the surface of the wheels is $\mu$. Calculate the time $T$ when the wheels start rolling without slipping. If the car is started on an upward slope in a similar manner, explain whether $T$ is increased or decreased relative to the case where the car starts on a horizontal surface.

• # 4.I.4C

For the dynamical system

$\ddot{x}=-\sin x$

find the stable and unstable fixed points and the equation determining the separatrix. Sketch the phase diagram. If the system starts on the separatrix at $x=0$, write down an integral determining the time taken for the velocity $\dot{x}$ to reach zero. Show that the integral is infinite.

• # 4.II.10C

A particle of mass $m$ bounces back and forth between two walls of mass $M$ moving towards each other in one dimension. The walls are separated by a distance $\ell(t)$. The wall on the left has velocity $+V(t)$ and the wall on the right has velocity $-V(t)$. The particle has speed $v(t)$. Friction is negligible and the particle-wall collisions are elastic.

Consider a collision between the particle and the wall on the right. Show that the centre-of-mass velocity of the particle-wall system is $v_{\mathrm{cm}}=(m v-M V) /(m+M)$. Calculate the particle's speed following the collision.

Assume that the particle is much lighter than the walls, i.e., $m \ll M$. Show that the particle's speed increases by approximately $2 V$ every time it collides with a wall.

Assume also that $v \gg V$ (so that particle-wall collisions are frequent) and that the velocities of the two walls remain nearly equal and opposite. Show that in a time interval $\Delta t$, over which the change in $V$ is negligible, the wall separation changes by $\Delta \ell \approx-2 V \Delta t$. Show that the number of particle-wall collisions during $\Delta t$ is approximately $v \Delta t / \ell$ and that the particle's speed increases by $\Delta v \approx-(\Delta \ell / \ell) v$ during this time interval.

Hence show that under the given conditions the particle speed $v$ is approximately proportional to $\ell^{-1}$.

• # 4.II.11C

Two light, rigid rods of length $2 \ell$ have a mass $m$ attached to each end. Both are free to move in two dimensions. The two rods are placed so that their two ends are located at $(-d,+2 \ell),(-d, 0)$, and $(+d, 0),(+d,-2 \ell)$ respectively, where $d$ is positive. They are set in motion with no rotation, with centre-of-mass velocities $(+V, 0)$ and $(-V, 0)$, so that the lower mass on the first rod collides head on with the upper mass on the second rod at the origin $(0,0)$. [You may assume that the impulse is directed along the $x$-axis.]

Assuming the collision is elastic, calculate the centre-of-mass velocity $\boldsymbol{v}$ and the angular velocity $\boldsymbol{\omega}$ of each rod immediately after the collision.

Assuming a coefficient of restitution $e$, compute $\boldsymbol{v}$ and $\boldsymbol{\omega}$ for each rod after the collision.

• # 4.II.12C

A particle of mass $m$ and charge $q>0$ moves in a time-dependent magnetic field $\mathbf{B}=\left(0,0, B_{z}(t)\right)$.

Write down the equations of motion governing the particle's $x, y$ and $z$ coordinates.

Show that the speed of the particle in the $(x, y)$ plane, $V=\sqrt{\dot{x}^{2}+\dot{y}^{2}}$, is a constant.

Show that the general solution of the equations of motion is

\begin{aligned} &x(t)=x_{0}+V \int_{0}^{t} d t^{\prime} \cos \left(-\int_{0}^{t^{\prime}} d t^{\prime \prime} q \frac{B_{z}\left(t^{\prime \prime}\right)}{m}+\phi\right) \\ &y(t)=y_{0}+V \int_{0}^{t} d t^{\prime} \sin \left(-\int_{0}^{t^{\prime}} d t^{\prime \prime} q \frac{B_{z}\left(t^{\prime \prime}\right)}{m}+\phi\right) \\ &z(t)=z_{0}+v_{z} t \end{aligned}

and interpret each of the six constants of integration, $x_{0}, y_{0}, z_{0}, v_{z}, V$ and $\phi$. [Hint: Solve the equations for the particle's velocity in cylindrical polars.]

Let $B_{z}(t)=\beta t$, where $\beta$ is a positive constant. Assuming that $x_{0}=y_{0}=z_{0}=v_{z}=\phi=0$ and $V=1$, calculate the position of the particle in the limit $t \rightarrow \infty$ (you may assume this limit exists). [Hint: You may use the results $\int_{0}^{\infty} d x \cos \left(x^{2}\right)=\int_{0}^{\infty} d x \sin \left(x^{2}\right)=\sqrt{\pi / 8}$.]

• # 4.II.9C

A motorcycle of mass $M$ moves on a bowl-shaped surface specified by its height $h(r)$ where $r=\sqrt{x^{2}+y^{2}}$ is the radius in cylindrical polar coordinates $(r, \phi, z)$. The torque exerted by the motorcycle engine on the rear wheel results in a force $\mathbf{F}(t)$ pushing the motorcycle forward. Assuming $\mathbf{F}(t)$ is directed along the motorcycle's velocity and that the motorcycle's vertical velocity and acceleration are small, show that the motion is described by

\begin{aligned} \ddot{r}-r \dot{\phi}^{2} &=-g \frac{d h}{d r}+\frac{F(t)}{M} \frac{\dot{r}}{\sqrt{\dot{r}^{2}+r^{2} \dot{\phi}^{2}}} \\ r \ddot{\phi}+2 \dot{r} \dot{\phi} &=\frac{F(t)}{M} \frac{r \dot{\phi}}{\sqrt{\dot{r}^{2}+r^{2} \dot{\phi}^{2}}} \end{aligned}

where dots denote time derivatives, $F(t)=|\mathbf{F}(t)|$ and $g$ is the acceleration due to gravity.

The motorcycle rider can adjust $F(t)$ to produce the desired trajectory. If the rider wants to move on a curve $r(\phi)$, show that $\phi(t)$ must obey

$\dot{\phi}^{2}=g \frac{d h}{d r} /\left(r+\frac{2}{r}\left(\frac{d r}{d \phi}\right)^{2}-\frac{d^{2} r}{d \phi^{2}}\right)$

Now assume that $h(r)=r^{2} / \ell$, with $\ell$ a constant, and $r(\phi)=\epsilon \phi$ with $\epsilon$ a positive constant, and $0 \leqslant \phi<\infty$ so that the desired trajectory is a spiral curve. Assuming that $\phi(t)$ tends to infinity as $t$ tends to infinity, show that $\dot{\phi}(t)$ tends to $\sqrt{2 g / \ell}$ and $F(t)$ tends to $4 \epsilon M g / \ell$ as $t$ tends to infinity.


• # 4.I.1E

Explain what is meant by a prime number.

By considering numbers of the form $6 p_{1} p_{2} \cdots p_{n}-1$, show that there are infinitely many prime numbers of the form $6 k-1$.

By considering numbers of the form $\left(2 p_{1} p_{2} \cdots p_{n}\right)^{2}+3$, show that there are infinitely many prime numbers of the form $6 k+1$. [You may assume the result that, for a prime $p>3$, the congruence $x^{2} \equiv-3(\bmod p)$ is soluble only if $p \equiv 1(\bmod 6)$.]

• # 4.I.2E

Define the binomial coefficient $\left(\begin{array}{l}n \\ r\end{array}\right)$ and prove that

$\left(\begin{array}{c} n+1 \\ r \end{array}\right)=\left(\begin{array}{c} n \\ r \end{array}\right)+\left(\begin{array}{c} n \\ r-1 \end{array}\right) \quad \text { for } 0<r \leqslant n .$

Show also that if $p$ is prime then $\left(\begin{array}{l}p \\ r\end{array}\right)$ is divisible by $p$ for $0<r<p$.

Deduce that if $0 \leqslant k<p$ and $0 \leqslant r \leqslant k$ then

$\left(\begin{array}{c} p+k \\ r \end{array}\right) \equiv\left(\begin{array}{c} k \\ r \end{array}\right) \quad(\bmod p) .$

• # 4.II.5E

Explain what is meant by an equivalence relation on a set $A$.

If $R$ and $S$ are two equivalence relations on the same set $A$, we define

$R \circ S=\{(x, z) \in A \times A:$ there exists $y \in A$ such that $(x, y) \in R$ and $(y, z) \in S\} .$

Show that the following conditions are equivalent:

(i) $R \circ S$ is a symmetric relation on $A$;

(ii) $R \circ S$ is a transitive relation on $A$;

(iii) $S \circ R \subseteq R \circ S$;

(iv) $R \circ S$ is the unique smallest equivalence relation on $A$ containing both $R$ and $S$.

Show also that these conditions hold if $A=\mathbb{Z}$ and $R$ and $S$ are the relations of congruence modulo $m$ and modulo $n$, for some positive integers $m$ and $n$.

• # 4.II.6E

State and prove the Inclusion-Exclusion Principle.

A permutation $\sigma$ of $\{1,2, \ldots, n\}$ is called a derangement if $\sigma(j) \neq j$ for every $j \leqslant n$. Use the Inclusion-Exclusion Principle to find a formula for the number $f(n)$ of derangements of $\{1,2, \ldots, n\}$. Show also that $f(n)/n!$ converges to $1/e$ as $n \rightarrow \infty$.
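
The inclusion-exclusion count can be checked by brute force for small $n$. The sketch below assumes the answer $f(n)=\sum_{k=0}^{n}(-1)^{k} n!/k!$, which is what the question asks you to derive:

```python
# Hedged check: assumed formula f(n) = sum_k (-1)^k n!/k! vs brute force,
# and the limit f(n)/n! -> 1/e.
import math
from itertools import permutations

def derangements(n):
    # inclusion-exclusion over the events "sigma fixes position k"
    return sum((-1) ** k * math.factorial(n) // math.factorial(k)
               for k in range(n + 1))

for n in range(1, 7):
    brute = sum(all(p[i] != i for i in range(n))
                for p in permutations(range(n)))
    assert derangements(n) == brute

assert abs(derangements(10) / math.factorial(10) - 1 / math.e) < 1e-7
```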

• # 4.II.7E

State and prove Fermat's Little Theorem.

An odd number $n$ is called a Carmichael number if it is not prime, but every positive integer $a$ satisfies $a^{n} \equiv a(\bmod n)$. Show that a Carmichael number cannot be divisible by the square of a prime. Show also that a product of two distinct odd primes cannot be a Carmichael number, and that a product of three distinct odd primes $p, q, r$ is a Carmichael number if and only if $p-1$ divides $q r-1, q-1$ divides $p r-1$ and $r-1$ divides $p q-1$. Deduce that 1729 is a Carmichael number.

[You may assume the result that, for any prime $p$, there exists a number $g$ prime to $p$ such that the congruence $g^{d} \equiv 1(\bmod p)$ holds only when $d$ is a multiple of $p-1$. The prime factors of 1729 are 7, 13 and 19.]
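
The final deduction can be checked directly. This sketch verifies the stated divisibility criterion for $1729=7 \cdot 13 \cdot 19$ and tests $a^{n} \equiv a \pmod n$ for a range of bases (a numerical check, not the required proof):

```python
# Check the question's criterion for 1729 = 7 * 13 * 19, then verify
# a^n = a (mod n) directly for a sample of bases.
p, q, r = 7, 13, 19
n = p * q * r
assert n == 1729

assert (q * r - 1) % (p - 1) == 0    # p - 1 divides qr - 1
assert (p * r - 1) % (q - 1) == 0    # q - 1 divides pr - 1
assert (p * q - 1) % (r - 1) == 0    # r - 1 divides pq - 1

for a in range(1, 200):
    assert pow(a, n, n) == a % n     # Carmichael property, sampled
```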

• # 4.II.8E

Explain what it means for a set to be countable. Prove that a countable union of countable sets is countable, and that the set of all subsets of $\mathbb{N}$ is uncountable.

A function $f: \mathbb{N} \rightarrow \mathbb{N}$ is said to be increasing if $f(m) \leqslant f(n)$ whenever $m \leqslant n$, and decreasing if $f(m) \geqslant f(n)$ whenever $m \leqslant n$. Show that the set of all increasing functions $\mathbb{N} \rightarrow \mathbb{N}$ is uncountable, but that the set of decreasing functions is countable.

[Standard results on countability, other than those you are asked to prove, may be assumed.]


• # 2.I.3F

What is a convex function? State Jensen's inequality for a convex function of a random variable which takes finitely many values.

Let $p \geqslant 1$. By using Jensen's inequality, or otherwise, find the smallest constant $c_{p}$ so that

$(a+b)^{p} \leqslant c_{p}\left(a^{p}+b^{p}\right) \text { for all } a, b \geqslant 0 .$

[You may assume that $x \mapsto|x|^{p}$ is convex for $p \geqslant 1$.]
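
The candidate constant can be probed numerically. The sketch below assumes the answer $c_{p}=2^{p-1}$ (which the question asks you to find) and checks both the inequality on random samples and equality at $a=b$:

```python
# Hedged sketch: assume c_p = 2**(p-1); check (a+b)^p <= c_p (a^p + b^p)
# on random samples, and that equality holds at a = b.
import random

random.seed(1)
for p in (1, 1.5, 2, 3):
    cp = 2 ** (p - 1)
    for _ in range(1000):
        a, b = random.uniform(0, 10), random.uniform(0, 10)
        assert (a + b) ** p <= cp * (a ** p + b ** p) + 1e-9
    a = b = 1.0                      # equality case a = b
    assert abs((a + b) ** p - cp * (a ** p + b ** p)) < 1e-12
```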

• # 2.I.4F

Let $K$ be a fixed positive integer and $X$ a discrete random variable with values in $\{1,2, \ldots, K\}$. Define the probability generating function of $X$. Express the mean of $X$ in terms of its probability generating function. The Dirichlet probability generating function of $X$ is defined as

$q(z)=\sum_{n=1}^{K} \frac{1}{n^{z}} P(X=n)$

Express the mean of $X$ and the mean of $\log X$ in terms of $q(z)$.

• # 2.II.10F

Let $X, Y$ be independent random variables with values in $(0, \infty)$ and the same probability density $\frac{2}{\sqrt{\pi}} e^{-x^{2}}$. Let $U=X^{2}+Y^{2}, V=Y / X$. Compute the joint probability density of $U, V$ and the marginal densities of $U$ and $V$ respectively. Are $U$ and $V$ independent?

• # 2.II.11F

A normal deck of playing cards contains 52 cards, four each with face values in the set $\mathcal{F}=\{A, 2,3,4,5,6,7,8,9,10, J, Q, K\}$. Suppose the deck is well shuffled so that each arrangement is equally likely. Write down the probability that the top and bottom cards have the same face value.

Consider the following algorithm for shuffling:

S1: Permute the deck randomly so that each arrangement is equally likely.

S2: If the top and bottom cards do not have the same face value, toss a biased coin that comes up heads with probability $p$; go back to step S1 if a head turns up. Otherwise stop.

All coin tosses and all permutations are assumed to be independent. When the algorithm stops, let $X$ and $Y$ denote the respective face values of the top and bottom cards and compute the probability that $X=Y$. Write down the probability that $X=x$ for some $x \in \mathcal{F}$ and the probability that $Y=y$ for some $y \in \mathcal{F}$. What value of $p$ will make $X$ and $Y$ independent random variables? Justify your answer.

• # 2.II.12F

Let $\gamma>0$ and define

$f(x)=\gamma \frac{1}{1+x^{2}}, \quad-\infty<x<\infty$

Find $\gamma$ such that $f$ is a probability density function. Let $\left\{X_{i}: i \geqslant 1\right\}$ be a sequence of independent, identically distributed random variables, each having $f$ with the correct choice of $\gamma$ as probability density. Compute the probability density function of $X_{1}+\cdots+X_{n}$. [You may use the identity

$m \int_{-\infty}^{\infty}\left\{\left(1+y^{2}\right)\left[m^{2}+(x-y)^{2}\right]\right\}^{-1} d y=\pi(m+1)\left\{(m+1)^{2}+x^{2}\right\}^{-1}$

valid for all $x \in \mathbb{R}$ and $m \in \mathbb{N}$.]

Deduce the probability density function of

$\frac{X_{1}+\cdots+X_{n}}{n}$

Explain why your result does not contradict the weak law of large numbers.

• # 2.II.9F

Suppose that a population evolves in generations. Let $Z_{n}$ be the number of members in the $n$-th generation and $Z_{0} \equiv 1$. Each member of the $n$-th generation gives birth to a family, possibly empty, of members of the $(n+1)$-th generation; the size of this family is a random variable and we assume that the family sizes of all individuals form a collection of independent identically distributed random variables with the same generating function $G$.

Let $G_{n}$ be the generating function of $Z_{n}$. State and prove a formula for $G_{n}$ in terms of $G$. Use this to compute the variance of $Z_{n}$.

Now consider the total number of individuals in the first $n$ generations; this number is a random variable and we write $H_{n}$ for its generating function. Find a formula that expresses $H_{n+1}(s)$ in terms of $H_{n}(s), G(s)$ and $s$.
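
The composition formula $G_{n+1}(s)=G_{n}(G(s))$ and the resulting variance can be checked with exact polynomial arithmetic when the family-size distribution has finite support. The sketch below assumes the standard variance answer $\operatorname{Var} Z_{n}=\sigma^{2} m^{n-1}\left(m^{n}-1\right) /(m-1)$ for mean $m \neq 1$ and offspring variance $\sigma^{2}$ (the formula the question asks you to derive), with an offspring distribution chosen for illustration:

```python
# Hedged sketch: compose the offspring pgf G with itself and compare the
# mean and variance of Z_n with m^n and sigma^2 m^(n-1)(m^n - 1)/(m - 1).

def padd(a, b):
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a)); b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def pmul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def pcompose(a, b):                 # coefficients of a(b(s)), low degree first
    out, power = [0.0], [1.0]
    for c in a:
        out = padd(out, [c * v for v in power])
        power = pmul(power, b)
    return out

G = [0.25, 0.35, 0.4]               # illustrative offspring pgf coefficients
m = sum(k * c for k, c in enumerate(G))
var = sum(k * k * c for k, c in enumerate(G)) - m * m

Gn = G[:]
for n in range(1, 5):
    mean_n = sum(k * c for k, c in enumerate(Gn))
    var_n = sum(k * k * c for k, c in enumerate(Gn)) - mean_n ** 2
    assert abs(mean_n - m ** n) < 1e-9
    assert abs(var_n - var * m ** (n - 1) * (m ** n - 1) / (m - 1)) < 1e-9
    Gn = pcompose(Gn, G)            # G_{n+1}(s) = G_n(G(s))
```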


• # 3.I.3A

Consider the vector field $\mathbf{F}(\mathbf{x})=\left(\left(3 x^{3}-x^{2}\right) y,\left(y^{3}-2 y^{2}+y\right) x, z^{2}-1\right)$ and let $S$ be the surface of a unit cube with one corner at $(0,0,0)$, another corner at $(1,1,1)$ and aligned with edges along the $x$-, $y$ - and $z$-axes. Use the divergence theorem to evaluate

$I=\int_{S} \mathbf{F} \cdot d \mathbf{S}$

Verify your result by calculating the integral directly.
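
Since the divergence is polynomial, the volume integral separates into one-dimensional monomial integrals and can be evaluated exactly. The sketch below assumes $\nabla \cdot \mathbf{F}=\left(9 x^{2}-2 x\right) y+\left(3 y^{2}-4 y+1\right) x+2 z$, which is the computation the question expects you to do:

```python
# Hedged exact check: integrate div F = (9x^2 - 2x) y + (3y^2 - 4y + 1) x + 2z
# over the unit cube using 1/(k+1) for each monomial factor.
from fractions import Fraction as F

def mono(k):                        # integral of x^k over [0, 1]
    return F(1, k + 1)

term1 = (9 * mono(2) - 2 * mono(1)) * mono(1)            # (9x^2-2x) y term
term2 = mono(1) * (3 * mono(2) - 4 * mono(1) + mono(0))  # (3y^2-4y+1) x term
term3 = 2 * mono(1)                                      # 2z term
print(term1 + term2 + term3)                             # 2
```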

• # 3.I.4A

Use suffix notation in Cartesian coordinates to establish the following two identities for the vector field $\mathbf{v}$ :

$\nabla \cdot(\nabla \times \mathbf{v})=0, \quad(\mathbf{v} \cdot \nabla) \mathbf{v}=\nabla\left(\frac{1}{2}|\mathbf{v}|^{2}\right)-\mathbf{v} \times(\nabla \times \mathbf{v})$

• # 3.II.10A

State Stokes' theorem for a vector field $\mathbf{A}$.

By applying Stokes' theorem to the vector field $\mathbf{A}=\phi \mathbf{k}$, where $\mathbf{k}$ is an arbitrary constant vector in $\mathbb{R}^{3}$ and $\phi$ is a scalar field defined on a surface $S$ bounded by a curve $\partial S$, show that

$\int_{S} d \mathbf{S} \times \nabla \phi=\int_{\partial S} \phi d \mathbf{x}$

For the vector field $\mathbf{A}=x^{2} y^{4}(1,1,1)$ in Cartesian coordinates, evaluate the line integral

$I=\int \mathbf{A} \cdot d \mathbf{x}$

around the boundary of the quadrant of the unit circle lying between the $x$- and $y$-axes, that is, along the straight line from $(0,0,0)$ to $(1,0,0)$, then the circular arc $x^{2}+y^{2}=1, z=0$ from $(1,0,0)$ to $(0,1,0)$ and finally the straight line from $(0,1,0)$ back to $(0,0,0)$.

• # 3.II.11A

In a region $R$ of $\mathbb{R}^{3}$ bounded by a closed surface $S$, suppose that $\phi_{1}$ and $\phi_{2}$ are both solutions of $\nabla^{2} \phi=0$, satisfying boundary conditions on $S$ given by $\phi=f$ on $S$, where $f$ is a given function. Prove that $\phi_{1}=\phi_{2}$.

In $\mathbb{R}^{2}$ show that

$\phi(x, y)=\left(a_{1} \cosh \lambda x+a_{2} \sinh \lambda x\right)\left(b_{1} \cos \lambda y+b_{2} \sin \lambda y\right)$

is a solution of $\nabla^{2} \phi=0$, for any constants $a_{1}, a_{2}, b_{1}, b_{2}$ and $\lambda$. Hence, or otherwise, find a solution $\phi(x, y)$ in the region $x \geqslant 0$ and $0 \leqslant y \leqslant a$ which satisfies:

\begin{aligned} &\phi(x, 0)=0, \quad \phi(x, a)=0, \quad x \geqslant 0 \\ &\phi(0, y)=\sin \frac{n \pi y}{a}, \quad \phi(x, y) \rightarrow 0 \quad \text { as } \quad x \rightarrow \infty, \quad 0 \leqslant y \leqslant a \end{aligned}

where $a$ is a real constant and $n$ is an integer.

• # 3.II.12A

Define what is meant by an isotropic tensor. By considering a rotation of a second rank isotropic tensor $B_{i j}$ by $90^{\circ}$ about the $z$-axis, show that its components must satisfy $B_{11}=B_{22}$ and $B_{13}=B_{31}=B_{23}=B_{32}=0$. Now consider a second and different rotation to show that $B_{i j}$ must be a multiple of the Kronecker delta, $\delta_{i j}$.

Suppose that a homogeneous but anisotropic crystal has the conductivity tensor

$\sigma_{i j}=\alpha \delta_{i j}+\gamma n_{i} n_{j}$

where $\alpha, \gamma$ are real constants and the $n_{i}$ are the components of a constant unit vector $\mathbf{n}$ $(\mathbf{n} \cdot \mathbf{n}=1)$. The electric current density $\mathbf{J}$ is then given in components by

$J_{i}=\sigma_{i j} E_{j}$

where $E_{j}$ are the components of the electric field $\mathbf{E}$. Show that

(i) if $\alpha \neq 0$ and $\gamma \neq 0$, then there is a plane such that if $\mathbf{E}$ lies in this plane, then $\mathbf{E}$ and $\mathbf{J}$ must be parallel, and

(ii) if $\gamma \neq-\alpha$ and $\alpha \neq 0$, then $\mathbf{E} \neq 0$ implies $\mathbf{J} \neq 0$.

If $D_{i j}=\epsilon_{i j k} n_{k}$, find the value of $\gamma$ such that

$\sigma_{i j} D_{j k} D_{k m}=-\sigma_{i m}$

• # 3.II.9A

Evaluate the line integral

$\int \alpha\left(x^{2}+x y\right) d x+\beta\left(x^{2}+y^{2}\right) d y$

with $\alpha$ and $\beta$ constants, along each of the following paths between the points $A=(1,0)$ and $B=(0,1)$ :

(i) the straight line between $A$ and $B$;

(ii) the $x$-axis from $A$ to the origin $(0,0)$ followed by the $y$-axis to $B$;

(iii) anti-clockwise from $A$ to $B$ around the circular path centred at the origin $(0,0)$.

You should obtain the same answer for the three paths when $\alpha=2 \beta$. Show that when $\alpha=2 \beta$, the integral takes the same value along any path between $A$ and $B$.
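
The path-independence claim can be checked by numerical quadrature. The sketch below takes $\beta=1$, $\alpha=2\beta=2$ (an illustrative choice) and assumes the common value is $-1/3$, which is the answer the question asks you to obtain:

```python
# Hedged sketch: evaluate the line integral along the three stated paths
# for alpha = 2, beta = 1 using a midpoint rule; all three should agree
# (the assumed common value is -1/3).
import math

a, b = 2.0, 1.0                       # alpha = 2 * beta

def F(x, y):
    return a * (x * x + x * y), b * (x * x + y * y)

def line_integral(path, n=20000):     # path: t in [0, 1] -> (x, y)
    total = 0.0
    x0, y0 = path(0.0)
    for k in range(1, n + 1):
        x1, y1 = path(k / n)
        P, Q = F((x0 + x1) / 2, (y0 + y1) / 2)
        total += P * (x1 - x0) + Q * (y1 - y0)
        x0, y0 = x1, y1
    return total

straight = lambda t: (1 - t, t)
axes     = lambda t: (1 - 2 * t, 0.0) if t < 0.5 else (0.0, 2 * t - 1)
arc      = lambda t: (math.cos(t * math.pi / 2), math.sin(t * math.pi / 2))

vals = [line_integral(p) for p in (straight, axes, arc)]
print(vals)                           # all close to -1/3
```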
