# Part IA, 2002

1.I.1B

(a) State the Orbit-Stabilizer Theorem for a finite group $G$ acting on a set $X$.

(b) Suppose that $G$ is the group of rotational symmetries of a cube $C$. Two regular tetrahedra $T$ and $T^{\prime}$ are inscribed in $C$, each using half the vertices of $C$. What is the order of the stabilizer in $G$ of $T$ ?

1.I.2D

State the Fundamental Theorem of Algebra. Define the characteristic equation for an arbitrary $3 \times 3$ matrix $A$ whose entries are complex numbers. Explain why the matrix must have three eigenvalues, not necessarily distinct.

Find the characteristic equation of the matrix

$A=\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 0 & i \\ 0 & -i & 0 \end{array}\right)$

and hence find the three eigenvalues of $A$. Find a set of linearly independent eigenvectors, specifying which eigenvector belongs to which eigenvalue.
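As an illustrative numerical check (not part of the required working), the candidate eigenvalues $1, 1, -1$ and one consistent choice of eigenvectors can be verified directly with exact complex arithmetic; the eigenvectors below are an assumption to be confirmed, not the only valid choice.

```python
# Sanity check of the characteristic polynomial and eigenvalues of A,
# using a hand-rolled 3x3 determinant (exact complex-integer arithmetic).

def det3(m):
    # cofactor expansion along the first row
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

A = [[1, 0, 0],
     [0, 0, 1j],
     [0, -1j, 0]]

def char_poly(lam):
    # det(A - lambda*I)
    M = [[A[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
    return det3(M)

# the candidate eigenvalues 1 (twice) and -1 should be roots
residuals = [abs(char_poly(lam)) for lam in (1, 1, -1)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# one consistent (hypothetical) choice of eigenvectors, checked via A v = lambda v
pairs = [(1, [1, 0, 0]), (1, [0, 1j, 1]), (-1, [0, 1, 1j])]
eig_residuals = [max(abs(a - lam * b) for a, b in zip(matvec(A, v), v))
                 for lam, v in pairs]
```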

1.II.5B

(a) Find a subset $T$ of the Euclidean plane $\mathbb{R}^{2}$ that is not fixed by any isometry (rigid motion) except the identity.

Let $G$ be a subgroup of the group of isometries of $\mathbb{R}^{2}, T$ a subset of $\mathbb{R}^{2}$ not fixed by any isometry except the identity, and let $S$ denote the union $\bigcup_{g \in G} g(T)$. Does the group $H$ of isometries of $S$ contain $G$ ? Justify your answer.

(b) Find an example of such a $G$ and $T$ with $H \neq G$.

1.II.6B

(a) Suppose that $g$ is a Möbius transformation, acting on the extended complex plane. What are the possible numbers of fixed points that $g$ can have? Justify your answer.

(b) Show that the operation $c$ of complex conjugation, defined by $c(z)=\bar{z}$, is not a Möbius transformation.

1.II.7B

(a) Find, with justification, the matrix, with respect to the standard basis of $\mathbb{R}^{2}$, of the rotation through an angle $\alpha$ about the origin.

(b) Find the matrix, with respect to the standard basis of $\mathbb{R}^{3}$, of the rotation through an angle $\alpha$ about the axis containing the point $\left(\frac{3}{5}, \frac{4}{5}, 0\right)$ and the origin. You may express your answer in the form of a product of matrices.

1.II.8D

Define what is meant by a vector space $V$ over the real numbers $\mathbb{R}$. Define subspace, proper subspace, spanning set, basis, and dimension.

Define the sum $U+W$ and intersection $U \cap W$ of two subspaces $U$ and $W$ of a vector space $V$. Why is the intersection never empty?

Let $V=\mathbb{R}^{4}$ and let $U=\left\{\mathbf{x} \in V: x_{1}-x_{2}+x_{3}-x_{4}=0\right\}$, where $\mathbf{x}=\left(x_{1}, x_{2}, x_{3}, x_{4}\right)$, and let $W=\left\{\mathbf{x} \in V: x_{1}-x_{2}-x_{3}+x_{4}=0\right\}$. Show that $U \cap W$ has the orthogonal basis $\mathbf{b}_{1}, \mathbf{b}_{2}$ where $\mathbf{b}_{1}=(1,1,0,0)$ and $\mathbf{b}_{2}=(0,0,1,1)$. Extend this basis to find orthogonal bases of $U, W$, and $U+W$. Show that $U+W=V$ and hence verify that, in this case,

$\operatorname{dim} U+\operatorname{dim} W=\operatorname{dim}(U+W)+\operatorname{dim}(U \cap W)$
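A quick computational check of the claims above, using one possible choice of extension vectors (the particular $\mathbf{b}_3, \mathbf{b}_4$ below are an assumption; any orthogonal extension would do):

```python
# Verify the stated orthogonal bases and the dimension identity
# dim U + dim W = dim(U+W) + dim(U ∩ W) for this example.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

in_U = lambda x: x[0] - x[1] + x[2] - x[3] == 0
in_W = lambda x: x[0] - x[1] - x[2] + x[3] == 0

b1, b2 = (1, 1, 0, 0), (0, 0, 1, 1)   # claimed basis of U ∩ W
b3 = (1, -1, -1, 1)                   # candidate extension to a basis of U
b4 = (1, -1, 1, -1)                   # candidate extension to a basis of W

checks = [
    in_U(b1) and in_W(b1), in_U(b2) and in_W(b2),   # b1, b2 lie in U ∩ W
    in_U(b3), in_W(b4),                              # b3 in U, b4 in W
    dot(b1, b2) == 0, dot(b1, b3) == 0, dot(b2, b3) == 0,
    dot(b1, b4) == 0, dot(b2, b4) == 0, dot(b3, b4) == 0,
]
# {b1, b2, b3, b4} is then an orthogonal basis of U + W = R^4, so
# dim U + dim W = 3 + 3 while dim(U+W) + dim(U ∩ W) = 4 + 2
dim_identity = (3 + 3 == 4 + 2)
```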

3.I.1A

Given two real non-zero $2 \times 2$ matrices $A$ and $B$, with $A B=0$, show that $A$ maps $\mathbb{R}^{2}$ onto a line. Is it always true that $B A=0$? Show that there is always a non-zero matrix $C$ with $C A=0=A C$. Justify your answers.

3.I.2B

(a) What does it mean for a group to be cyclic? Give an example of a finite abelian group that is not cyclic, and justify your assertion.

(b) Suppose that $G$ is a finite group of rotations of $\mathbb{R}^{2}$ about the origin. Is $G$ necessarily cyclic? Justify your answer.

3.II.5E

Prove, using the standard formula connecting $\delta_{i j}$ and $\epsilon_{i j k}$, that

$\mathbf{a} \times(\mathbf{b} \times \mathbf{c})=(\mathbf{a} \cdot \mathbf{c}) \mathbf{b}-(\mathbf{a} \cdot \mathbf{b}) \mathbf{c}$

Define, in terms of the dot and cross product, the triple scalar product $[\mathbf{a}, \mathbf{b}, \mathbf{c}]$ of three vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$ in $\mathbb{R}^{3}$ and show that it is invariant under cyclic permutation of the vectors.

Let $\mathbf{e}_{1}, \mathbf{e}_{2}, \mathbf{e}_{3}$ be a not necessarily orthonormal basis for $\mathbb{R}^{3}$, and define

$\hat{\mathbf{e}}_{1}=\frac{\mathbf{e}_{2} \times \mathbf{e}_{3}}{\left[\mathbf{e}_{1}, \mathbf{e}_{2}, \mathbf{e}_{3}\right]}, \quad \hat{\mathbf{e}}_{2}=\frac{\mathbf{e}_{3} \times \mathbf{e}_{1}}{\left[\mathbf{e}_{1}, \mathbf{e}_{2}, \mathbf{e}_{3}\right]}, \quad \hat{\mathbf{e}}_{3}=\frac{\mathbf{e}_{1} \times \mathbf{e}_{2}}{\left[\mathbf{e}_{1}, \mathbf{e}_{2}, \mathbf{e}_{3}\right]} .$

By calculating $\left[\hat{\mathbf{e}}_{1}, \hat{\mathbf{e}}_{2}, \hat{\mathbf{e}}_{3}\right]$, show that $\hat{\mathbf{e}}_{1}, \hat{\mathbf{e}}_{2}, \hat{\mathbf{e}}_{3}$ is also a basis for $\mathbb{R}^{3}$.

The vectors $\hat{\hat{\mathbf{e}}}_{1}, \hat{\hat{\mathbf{e}}}_{2}, \hat{\hat{\mathbf{e}}}_{3}$ are constructed from $\hat{\mathbf{e}}_{1}, \hat{\mathbf{e}}_{2}, \hat{\mathbf{e}}_{3}$ in the same way that $\hat{\mathbf{e}}_{1}, \hat{\mathbf{e}}_{2}, \hat{\mathbf{e}}_{3}$ are constructed from $\mathbf{e}_{1}, \mathbf{e}_{2}, \mathbf{e}_{3}$. Show that

$\hat{\hat{\mathbf{e}}}_{1}=\mathbf{e}_{1}, \quad \hat{\hat{\mathbf{e}}}_{2}=\mathbf{e}_{2}, \quad \hat{\hat{\mathbf{e}}}_{3}=\mathbf{e}_{3}.$

Show that a vector $\mathbf{V}$ has components $\mathbf{V} \cdot \hat{\mathbf{e}}_{1}, \mathbf{V} \cdot \hat{\mathbf{e}}_{2}, \mathbf{V} \cdot \hat{\mathbf{e}}_{3}$ with respect to the basis $\mathbf{e}_{1}, \mathbf{e}_{2}, \mathbf{e}_{3}$. What are the components of the vector $\mathbf{V}$ with respect to the basis $\hat{\mathbf{e}}_{1}, \hat{\mathbf{e}}_{2}, \hat{\mathbf{e}}_{3}$ ?

3.II.6E

(a) Give the general solution for $\mathbf{x}$ and $\mathbf{y}$ of the equations

$\mathbf{x}+\mathbf{y}=2 \mathbf{a}, \quad \mathbf{x} \cdot \mathbf{y}=c \quad(c<\mathbf{a} \cdot \mathbf{a})$

Show in particular that $\mathbf{x}$ and $\mathbf{y}$ must lie at opposite ends of a diameter of a sphere whose centre and radius should be specified.

(b) If two pairs of opposite edges of a tetrahedron are perpendicular, show that the third pair are also perpendicular to each other. Show also that the sum of the lengths squared of two opposite edges is the same for each pair.

3.II.7A

Explain why the number of solutions $\mathbf{x} \in \mathbb{R}^{3}$ of the simultaneous linear equations $A \mathbf{x}=\mathbf{b}$ is 0, 1 or infinite, where $A$ is a real $3 \times 3$ matrix and $\mathbf{b} \in \mathbb{R}^{3}$. Let $\alpha$ be the mapping which $A$ represents. State necessary and sufficient conditions on $\mathbf{b}$ and $\alpha$ for each of these possibilities to hold.

Let $A$ and $B$ be $3 \times 3$ matrices representing linear mappings $\alpha$ and $\beta$. Give necessary and sufficient conditions on $\alpha$ and $\beta$ for the existence of a $3 \times 3$ matrix $X$ with $A X=B$. When is $X$ unique?

Find $X$ when

$A=\left(\begin{array}{lll} 4 & 1 & 1 \\ 1 & 2 & 1 \\ 0 & 3 & 1 \end{array}\right), \quad B=\left(\begin{array}{lll} 1 & 1 & 1 \\ 0 & 1 & 0 \\ 3 & 1 & 2 \end{array}\right)$
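Once the conditions for solvability are established, $X$ can be confirmed numerically. The sketch below (not part of the required working) solves $AX=B$ column by column over the rationals and verifies the product exactly:

```python
from fractions import Fraction

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting, exact over Q
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

A = [[4, 1, 1], [1, 2, 1], [0, 3, 1]]
B = [[1, 1, 1], [0, 1, 0], [3, 1, 2]]

# column k of X solves A x = (column k of B)
cols = [solve(A, [B[i][k] for i in range(3)]) for k in range(3)]
X = [[cols[k][i] for k in range(3)] for i in range(3)]

# verify A X = B exactly
AX = [[sum(Fraction(A[i][j]) * X[j][k] for j in range(3)) for k in range(3)]
      for i in range(3)]
```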

3.II.8B

Suppose that $\mathbf{a}, \mathbf{b}, \mathbf{c}, \mathbf{d}$ are the vertices of a regular tetrahedron $T$ in $\mathbb{R}^{3}$ and that $\mathbf{a}=(1,1,1), \mathbf{b}=(-1,-1,1), \mathbf{c}=(-1,1,-1), \mathbf{d}=(1, x, y)$.

(a) Find $x$ and $y$.

(b) Find a matrix $M$ that is a rotation leaving $T$ invariant such that $M \mathbf{a}=\mathbf{b}$ and $M \mathbf{b}=\mathbf{a} .$
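The candidate answers can be checked mechanically: the first three vertices are alternate vertices of the cube $[-1,1]^{3}$, which suggests $\mathbf{d}=(1,-1,-1)$ for (a) and, for (b), the rotation by $\pi$ about the $z$-axis (the line through the midpoints of $\mathbf{ab}$ and $\mathbf{cd}$). These are candidates to be verified, not derivations:

```python
# Check that d = (1, -1, -1) makes all six edges equal, and that
# M = diag(-1, -1, 1) (det = +1, a rotation by pi about the z-axis)
# swaps a <-> b and c <-> d, hence leaves T invariant.

a, b, c, d = (1, 1, 1), (-1, -1, 1), (-1, 1, -1), (1, -1, -1)

def dist2(p, q):
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

verts = [a, b, c, d]
edges = [dist2(p, q) for i, p in enumerate(verts) for q in verts[i + 1:]]

M = [[-1, 0, 0], [0, -1, 0], [0, 0, 1]]

def apply(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

swaps = (apply(M, a) == b and apply(M, b) == a
         and apply(M, c) == d and apply(M, d) == c)
```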

1.I.3C

Suppose $a_{n} \in \mathbb{R}$ for $n \geqslant 1$ and $a \in \mathbb{R}$. What does it mean to say that $a_{n} \rightarrow a$ as $n \rightarrow \infty$? What does it mean to say that $a_{n} \rightarrow \infty$ as $n \rightarrow \infty$?

Show that, if $a_{n} \neq 0$ for all $n$ and $a_{n} \rightarrow \infty$ as $n \rightarrow \infty$, then $1 / a_{n} \rightarrow 0$ as $n \rightarrow \infty$. Is the converse true? Give a proof or a counterexample.

Show that, if $a_{n} \neq 0$ for all $n$ and $a_{n} \rightarrow a$ with $a \neq 0$, then $1 / a_{n} \rightarrow 1 / a$ as $n \rightarrow \infty$.

1.I.4C

Show that any bounded sequence of real numbers has a convergent subsequence.

Give an example of a sequence of real numbers with no convergent subsequence.

Give an example of an unbounded sequence of real numbers with a convergent subsequence.

1.II.10C

Show that a continuous real-valued function on a closed bounded interval is bounded and attains its bounds.

Write down examples of the following functions (no proof is required).

(i) A continuous function $f_{1}:(0,1) \rightarrow \mathbb{R}$ which is not bounded.

(ii) A continuous function $f_{2}:(0,1) \rightarrow \mathbb{R}$ which is bounded but does not attain its bounds.

(iii) A bounded function $f_{3}:[0,1] \rightarrow \mathbb{R}$ which is not continuous.

(iv) A function $f_{4}:[0,1] \rightarrow \mathbb{R}$ which is not bounded on any interval $[a, b]$ with $0 \leqslant a<b \leqslant 1 .$

[Hint: Consider first how to define $f_{4}$ on the rationals.]

1.II.11C

State the mean value theorem and deduce it from Rolle's theorem.

Use the mean value theorem to show that, if $h: \mathbb{R} \rightarrow \mathbb{R}$ is differentiable with $h^{\prime}(x)=0$ for all $x$, then $h$ is constant.

By considering the derivative of the function $g$ given by $g(x)=e^{-a x} f(x)$, find all the solutions of the differential equation $f^{\prime}(x)=a f(x)$ where $f: \mathbb{R} \rightarrow \mathbb{R}$ is differentiable and $a$ is a fixed real number.

Show that, if $f: \mathbb{R} \rightarrow \mathbb{R}$ is continuous, then the function $F: \mathbb{R} \rightarrow \mathbb{R}$ given by

$F(x)=\int_{0}^{x} f(t) d t$

is differentiable with $F^{\prime}(x)=f(x)$.

Find the solution of the equation

$g(x)=A+\int_{0}^{x} g(t) d t$

where $g: \mathbb{R} \rightarrow \mathbb{R}$ is differentiable and $A$ is a real number. You should explain why the solution is unique.

1.II.12C

Prove Taylor's theorem with some form of remainder.

An infinitely differentiable function $f: \mathbb{R} \rightarrow \mathbb{R}$ satisfies the differential equation

$f^{(3)}(x)=f(x)$

and the conditions $f(0)=1, f^{\prime}(0)=f^{\prime \prime}(0)=0$. If $R>0$ and $j$ is a positive integer, explain why we can find an $M_{j}$ such that

$\left|f^{(j)}(x)\right| \leqslant M_{j}$

for all $x$ with $|x| \leqslant R$. Explain why we can find an $M$ such that

$\left|f^{(j)}(x)\right| \leqslant M$

for all $x$ with $|x| \leqslant R$ and all $j \geqslant 0$.

Use your form of Taylor's theorem to show that

$f(x)=\sum_{n=0}^{\infty} \frac{x^{3 n}}{(3 n) !}$

1.II.9C

State some version of the fundamental axiom of analysis. State the alternating series test and prove it from the fundamental axiom.

In each of the following cases state whether $\sum_{n=1}^{\infty} a_{n}$ converges or diverges and prove your result. You may use any test for convergence provided you state it correctly.

(i) $a_{n}=(-1)^{n}(\log (n+1))^{-1}$.

(ii) $a_{2 n}=(2 n)^{-2}, a_{2 n-1}=-n^{-2}$.

(iii) $a_{3 n-2}=-(2 n-1)^{-1}, a_{3 n-1}=(4 n-1)^{-1}, a_{3 n}=(4 n)^{-1}$.

(iv) $a_{2^{n}+r}=(-1)^{n}\left(2^{n}+r\right)^{-1}$ for $0 \leqslant r \leqslant 2^{n}-1, n \geqslant 0$.

2.I.1D

Solve the equation

$\ddot{y}+\dot{y}-2 y=e^{-t}$

subject to the conditions $y(t)=\dot{y}(t)=0$ at $t=0$. Solve the equation

$\ddot{y}+\dot{y}-2 y=e^{t}$

subject to the same conditions $y(t)=\dot{y}(t)=0$ at $t=0$.

2.I.2D

Consider the equation

$\frac{d y}{d x}=x\left(\frac{1-y^{2}}{1-x^{2}}\right)^{1 / 2} \quad (*)$

where the positive square root is taken, within the square $\mathcal{S}: 0 \leqslant x<1,0 \leqslant y \leqslant 1$. Find the solution that begins at $x=y=0$. Sketch the corresponding solution curve, commenting on how its tangent behaves near each extremity. By inspection of the right-hand side of $(*)$, or otherwise, roughly sketch, using small line segments, the directions of flow throughout the square $\mathcal{S}$.

2.II.5D

Explain what is meant by an integrating factor for an equation of the form

$\frac{d y}{d x}+f(x, y)=0$

Show that $2 y e^{x}$ is an integrating factor for

$\frac{d y}{d x}+\frac{2 x+x^{2}+y^{2}}{2 y}=0$

and find the solution $y=y(x)$ such that $y(0)=a$, for given $a>0$.

Show that $2 x+x^{2} \geqslant-1$ for all $x$ and hence that

$\frac{d y}{d x} \leqslant \frac{1-y^{2}}{2 y}$

For a solution with $a \geqslant 1$, show graphically, by considering the sign of $d y / d x$ first for $x=0$ and then for $x<0$, that $d y / d x<0$ for all $x \leqslant 0$.

Sketch the solution for the case $a=1$, and show that $d y / d x \rightarrow-\infty$ both as $x \rightarrow-\infty$ and as $x \rightarrow b$ from below, where $b \approx 0.7035$ is the positive number that satisfies $b^{2}=e^{-b}$.

[Do not consider the range $x \geqslant b$.]
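The quoted constant $b \approx 0.7035$ is easy to confirm numerically, e.g. by bisection on $f(b)=b^{2}-e^{-b}$ (a check only, not part of the solution):

```python
import math

# Locate the positive root of b^2 = exp(-b) by bisection.
f = lambda b: b * b - math.exp(-b)
lo, hi = 0.0, 1.0          # f(0) = -1 < 0 and f(1) = 1 - 1/e > 0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
b = (lo + hi) / 2
```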

2.II.6D

Solve the differential equation

$\frac{d y}{d t}=r y(1-a y)$

for the general initial condition $y=y_{0}$ at $t=0$, where $r, a$, and $y_{0}$ are positive constants. Deduce that the equilibria at $y=a^{-1}$ and $y=0$ are stable and unstable, respectively.

By using the approximate finite-difference formula

$\frac{d y}{d t}=\frac{y_{n+1}-y_{n}}{\delta t}$

for the derivative of $y$ at $t=n \delta t$, where $\delta t$ is a positive constant and $y_{n}=y(n \delta t)$, show that the differential equation when thus approximated becomes the difference equation

$u_{n+1}=\lambda\left(1-u_{n}\right) u_{n},$

where $\lambda=1+r \delta t>1$ and where $u_{n}=\lambda^{-1} a(\lambda-1) y_{n}$. Find the two equilibria and, by linearizing the equation about them or otherwise, show that one is always unstable (given that $\lambda>1$ ) and that the other is stable or unstable according as $\lambda<3$ or $\lambda>3$. Show that this last instability is oscillatory with period $2 \delta t$. Why does this last instability have no counterpart for the differential equation? Show graphically how this instability can equilibrate to a periodic, finite-amplitude oscillation when $\lambda=3.2$.
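The oscillatory instability and its finite-amplitude equilibration can be seen by direct iteration. The following sketch (illustrative initial value assumed) shows that at $\lambda=3.2$, just beyond the threshold $\lambda=3$, the orbit settles onto a period-2 cycle rather than onto the fixed point $1-\lambda^{-1}$:

```python
# Iterate u_{n+1} = lambda * (1 - u_n) * u_n at lambda = 3.2.
lam = 3.2
u = 0.3                      # arbitrary starting value in (0, 1)
for _ in range(2000):        # discard the transient
    u = lam * (1 - u) * u

orbit = []
for _ in range(4):           # record four successive iterates
    orbit.append(u)
    u = lam * (1 - u) * u

fixed_point = 1 - 1 / lam    # the nonzero equilibrium, unstable here
```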

2.II.7D

The homogeneous equation

$\ddot{y}+p(t) \dot{y}+q(t) y=0$

has non-constant, non-singular coefficients $p(t)$ and $q(t)$. Two solutions of the equation, $y(t)=y_{1}(t)$ and $y(t)=y_{2}(t)$, are given. The solutions are known to be such that the determinant

$W(t)=\left|\begin{array}{ll} y_{1} & y_{2} \\ \dot{y}_{1} & \dot{y}_{2} \end{array}\right|$

is non-zero for all $t$. Define what is meant by linear dependence, and show that the two given solutions are linearly independent. Show also that

$W(t) \propto \exp \left(-\int^{t} p(s) d s\right) .$

In the corresponding inhomogeneous equation

$\ddot{y}+p(t) \dot{y}+q(t) y=f(t)$

the right-hand side $f(t)$ is a prescribed forcing function. Construct a particular integral of this inhomogeneous equation in the form

$y(t)=a_{1}(t) y_{1}(t)+a_{2}(t) y_{2}(t),$

where the two functions $a_{i}(t)$ are to be determined such that

$y_{1}(t) \dot{a}_{1}(t)+y_{2}(t) \dot{a}_{2}(t)=0$

for all $t$. Express your result for the functions $a_{i}(t)$ in terms of integrals of the functions $f(t) y_{1}(t) / W(t)$ and $f(t) y_{2}(t) / W(t)$.

Consider the case in which $p(t)=0$ for all $t$ and $q(t)$ is a positive constant, $q=\omega^{2}$ say, and in which the forcing $f(t)=\sin (\omega t)$. Show that in this case $y_{1}(t)$ and $y_{2}(t)$ can be taken as $\cos (\omega t)$ and $\sin (\omega t)$ respectively. Evaluate $f(t) y_{1}(t) / W(t)$ and $f(t) y_{2}(t) / W(t)$ and show that, as $t \rightarrow \infty$, one of the $a_{i}(t)$ increases in magnitude like a power of $t$ to be determined.

2.II.8D

For any solution of the equations

$\begin{aligned} &\dot{x}=\alpha x-y+y^{3} \quad(\alpha \text { constant }) \\ &\dot{y}=-x \end{aligned}$

show that

$\frac{d}{d t}\left(x^{2}-y^{2}+\frac{1}{2} y^{4}\right)=2 \alpha x^{2} .$

What does this imply about the behaviour of phase-plane trajectories at large distances from the origin as $t \rightarrow \infty$, in the case $\alpha=0$ ? Give brief reasoning but do not try to find explicit solutions.

Analyse the properties of the critical points and sketch the phase portrait (a) in the case $\alpha=0$, (b) in the case $\alpha=0.1$, and (c) in the case $\alpha=-0.1$.

4.I.3E

The position $x$ of the leading edge of an avalanche moving down a mountain side making a positive angle $\alpha$ to the horizontal satisfies the equation

$\frac{d}{d t}\left(x \frac{d x}{d t}\right)=g x \sin \alpha$

where $g$ is the acceleration due to gravity.

By multiplying the equation by $x \frac{d x}{d t}$, obtain the first integral

$x^{2} \dot{x}^{2}=\frac{2 g}{3} x^{3} \sin \alpha+c$

where $c$ is an arbitrary constant of integration and the dot denotes differentiation with respect to time.

Sketch the positive quadrant of the $(x, \dot{x})$ phase plane. Show that all solutions approach the trajectory

$\dot{x}=\left(\frac{2 g \sin \alpha}{3}\right)^{\frac{1}{2}} x^{\frac{1}{2}}$

Hence show that, independent of initial conditions, the avalanche ultimately has acceleration $\frac{1}{3} g \sin \alpha$.
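The approach to the limiting acceleration can be illustrated numerically. In the sketch below the values $g=9.8$, $\alpha=\pi/6$ and the initial data $x=1$, $\dot{x}=0$ are arbitrary illustrative assumptions; the governing ODE, rewritten as $\ddot{x}=g\sin\alpha-\dot{x}^{2}/x$, is integrated by a simple semi-implicit Euler step:

```python
import math

# Integrate x*xddot + xdot^2 = g*x*sin(alpha) and watch the
# acceleration settle to g*sin(alpha)/3, independent of initial data.
g, alpha = 9.8, math.pi / 6          # illustrative values
x, v = 1.0, 0.0                       # illustrative initial conditions
dt = 1e-4
for _ in range(int(20 / dt)):         # integrate to t = 20
    accel = g * math.sin(alpha) - v * v / x
    v += accel * dt                   # semi-implicit Euler
    x += v * dt

limit = g * math.sin(alpha) / 3
final_accel = g * math.sin(alpha) - v * v / x
```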

4.I.4E

An inertial reference frame $S$ and another reference frame $S^{\prime}$ have a common origin $O$. $S^{\prime}$ rotates with constant angular velocity $\boldsymbol{\omega}$ with respect to $S$. Assuming the result that

$\left(\frac{d \mathbf{a}}{d t}\right)_{S}=\left(\frac{d \mathbf{a}}{d t}\right)_{S^{\prime}}+\boldsymbol{\omega} \times \mathbf{a}$

for an arbitrary vector $\mathbf{a}(t)$, show that

$\left(\frac{d^{2} \mathbf{x}}{d t^{2}}\right)_{\mathcal{S}}=\left(\frac{d^{2} \mathbf{x}}{d t^{2}}\right)_{\mathcal{S}^{\prime}}+2 \boldsymbol{\omega} \times\left(\frac{d \mathbf{x}}{d t}\right)_{\mathcal{S}^{\prime}}+\boldsymbol{\omega} \times(\boldsymbol{\omega} \times \mathbf{x})$

where $\mathbf{x}$ is the position vector of a point $P$ measured from the origin.

A system of electrically charged particles, all with equal masses $m$ and charges $e$, moves under the influence of mutual central forces $\mathbf{F}_{i j}$ of the form

$\mathbf{F}_{i j}=\left(\mathbf{x}_{i}-\mathbf{x}_{j}\right) f\left(\left|\mathbf{x}_{i}-\mathbf{x}_{j}\right|\right)$

In addition each particle experiences a Lorentz force due to a constant weak magnetic field $\mathbf{B}$ given by

$e \frac{d \mathbf{x}_{i}}{d t} \times \mathbf{B}$

Transform the equations of motion to the rotating frame $\mathcal{S}^{\prime}$. Show that if the angular velocity is chosen to satisfy

$\boldsymbol{\omega}=-\frac{e}{2 m} \mathbf{B}$

and if terms of second order in $\mathbf{B}$ are neglected, then the equations of motion in the rotating frame are identical to those in the non-rotating frame in the absence of the magnetic field B.

4.II.10E

Derive the equation

$\frac{d^{2} u}{d \theta^{2}}+u=\frac{f(u)}{m h^{2} u^{2}}$

for the orbit $r^{-1}=u(\theta)$ of a particle of mass $m$ and angular momentum $h m$ moving under a central force $f(u)$ directed towards a fixed point $O$. Give an interpretation of $h$ in terms of the area swept out by a radius vector.

If the orbits are found to be circles passing through $O$, then deduce that the force varies inversely as the fifth power of the distance, $f=c u^{5}$, where $c$ is a constant. Is the force attractive or repulsive?

Show that, for fixed mass, the radius $R$ of the circle varies inversely as the angular momentum of the particle, and hence that the time taken to traverse a complete circle is proportional to $R^{3}$.

[You may assume, if you wish, the expressions for radial and transverse acceleration in the forms $\ddot{r}-r \dot{\theta}^{2}, 2 \dot{r} \dot{\theta}+r \ddot{\theta}$.]

4.II.11E

An electron of mass $m$ moving with velocity $\dot{\mathbf{x}}$ in the vicinity of the North Pole experiences a force

$\mathbf{F}=a \dot{\mathbf{x}} \times \frac{\mathbf{x}}{|\mathbf{x}|^{3}},$

where $a$ is a constant and the position vector $\mathbf{x}$ of the particle is with respect to an origin located at the North Pole. Write down the equation of motion of the electron, neglecting gravity. By taking the dot product of the equation with $\dot{\mathbf{x}}$ show that the speed of the electron is constant. By taking the cross product of the equation with $\mathbf{x}$ show that

$m \mathbf{x} \times \dot{\mathbf{x}}-a \frac{\mathbf{x}}{|\mathbf{x}|}=\mathbf{L}$

where $\mathbf{L}$ is a constant vector. By taking the dot product of this equation with $\mathbf{x}$, show that the electron moves on a cone centred on the North Pole.

4.II.12E

Calculate the moment of inertia of a uniform rod of length $2 l$ and mass $M$ about an axis through its centre and perpendicular to its length. Assuming it moves in a plane, give an expression for the kinetic energy of the rod in terms of the speed of the centre and the angle that it makes with a fixed direction.

Two such rods are freely hinged together at one end and the other two ends slide on a perfectly smooth horizontal floor. The rods are initially at rest and lie in a vertical plane, each making an angle $\alpha$ to the horizontal. The rods subsequently move under gravity. Calculate the speed with which the hinge strikes the ground.

4.II.9E

Write down the equations of motion for a system of $n$ gravitating point particles with masses $m_{i}$ and position vectors $\mathbf{x}_{i}=\mathbf{x}_{i}(t), i=1,2, \ldots, n$.

Assume that $\mathbf{x}_{i}=t^{2 / 3} \mathbf{a}_{i}$, where the vectors $\mathbf{a}_{i}$ are independent of time $t$. Obtain a system of equations for the vectors $\mathbf{a}_{i}$ which does not involve the time variable $t$.

Show that the constant vectors $\mathbf{a}_{i}$ must be located at stationary points of the function

$\sum_{i} \frac{1}{9} m_{i} \mathbf{a}_{i} \cdot \mathbf{a}_{i}+\frac{1}{2} \sum_{j} \sum_{i \neq j} \frac{G m_{i} m_{j}}{\left|\mathbf{a}_{i}-\mathbf{a}_{j}\right|}$

Show that for this system, the total angular momentum about the origin and the total momentum both vanish. What is the angular momentum about any other point?

4.I.1C

What does it mean to say that a function $f: A \rightarrow B$ is injective? What does it mean to say that a function $g: A \rightarrow B$ is surjective?

Consider the functions $f: A \rightarrow B, g: B \rightarrow C$ and their composition $g \circ f: A \rightarrow C$ given by $g \circ f(a)=g(f(a))$. Prove the following results.

(i) If $f$ and $g$ are surjective, then so is $g \circ f$.

(ii) If $f$ and $g$ are injective, then so is $g \circ f$.

(iii) If $g \circ f$ is injective, then so is $f$.

(iv) If $g \circ f$ is surjective, then so is $g$.

Give an example where $g \circ f$ is injective and surjective but $f$ is not surjective and $g$ is not injective.

4.I.2C

If $f, g: \mathbb{R} \rightarrow \mathbb{R}$ are infinitely differentiable, Leibniz's rule states that, if $n \geqslant 1$,

$\frac{d^{n}}{d x^{n}}(f(x) g(x))=\sum_{r=0}^{n}\left(\begin{array}{c} n \\ r \end{array}\right) f^{(n-r)}(x) g^{(r)}(x)$

Prove this result by induction. (You should prove any results on binomial coefficients that you need.)
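The rule is easy to spot-check on polynomials, where repeated differentiation is exact integer arithmetic. The sketch below (with an arbitrarily chosen $f$, $g$ and $n$) compares the $n$-th derivative of $fg$ with the Leibniz sum:

```python
from math import comb

# A polynomial is a coefficient list [c0, c1, ...] for c0 + c1*x + ...

def deriv(p, n=1):
    for _ in range(n):
        p = [k * p[k] for k in range(1, len(p))]
    return p

def mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

f = [0, 2, 0, 1]      # f(x) = x^3 + 2x  (illustrative choice)
g = [1, 0, 1]         # g(x) = x^2 + 1
n = 3

lhs = deriv(mul(f, g), n)                      # (fg)^(n)
rhs = []
for r in range(n + 1):                         # Leibniz sum
    rhs = add(rhs, [comb(n, r) * c for c in mul(deriv(f, n - r), deriv(g, r))])
```

Here both sides come out as the coefficient list of $18 + 60x^{2}$.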

4.II.7B

(a) Suppose that $p$ is an odd prime. Find $1^{p}+2^{p}+\ldots+(p-1)^{p}$ modulo $p$.

(b) Find $(p-1)!$ modulo $(1+2+\ldots+(p-1))$, when $p$ is an odd prime.
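Part (a) can be spot-checked by brute force: since $i^{p} \equiv i \pmod{p}$ by Fermat's little theorem, the sum is congruent to $p(p-1)/2 \equiv 0 \pmod{p}$ for odd $p$. A quick verification over a few small primes:

```python
# Brute-force check of part (a): 1^p + ... + (p-1)^p is divisible by p.
residues = {}
for p in [3, 5, 7, 11, 13, 17]:
    residues[p] = sum(pow(i, p, p) for i in range(1, p)) % p
```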

4.II.5F

What is meant by saying that a set is countable?

Prove that the union of countably many countable sets is itself countable.

Let $\left\{J_{i}: i \in I\right\}$ be a collection of disjoint intervals of the real line, each having strictly positive length. Prove that the index set $I$ is countable.

4.II.6F

(a) Let $S$ be a finite set, and let $\mathbb{P}(S)$ be the power set of $S$, that is, the set of all subsets of $S$. Let $f: \mathbb{P}(S) \rightarrow \mathbb{R}$ be additive in the sense that $f(A \cup B)=f(A)+f(B)$ whenever $A \cap B=\varnothing$. Show that, for $A_{1}, A_{2}, \ldots, A_{n} \in \mathbb{P}(S)$,

$\begin{aligned} f\left(\bigcup_{i} A_{i}\right)=\sum_{i} f\left(A_{i}\right)-\sum_{i<j} f\left(A_{i} \cap A_{j}\right) &+\sum_{i<j<k} f\left(A_{i} \cap A_{j} \cap A_{k}\right) \\ &-\cdots+(-1)^{n+1} f\left(\bigcap_{i} A_{i}\right) \end{aligned}$

(b) Let $A_{1}, A_{2}, \ldots, A_{n}$ be finite sets. Deduce from part (a) the inclusion-exclusion formula for the size (or cardinality) of $\bigcup_{i} A_{i}$.

(c) A derangement of the set $S=\{1,2, \ldots, n\}$ is a permutation $\pi$ (that is, a bijection from $S$ to itself) in which no member of the set is fixed (that is, $\pi(i) \neq i$ for all $i$ ). Using the inclusion-exclusion formula, show that the number $d_{n}$ of derangements satisfies $d_{n} / n ! \rightarrow e^{-1}$ as $n \rightarrow \infty$.
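The inclusion-exclusion count $d_{n}=n! \sum_{k=0}^{n}(-1)^{k}/k!$ and the limit $d_{n}/n! \to e^{-1}$ can both be confirmed computationally (a sanity check, not a proof):

```python
from itertools import permutations
from math import factorial, exp

def d_formula(n):
    # d_n = n! * sum_{k=0}^{n} (-1)^k / k!, from inclusion-exclusion
    return sum((-1) ** k * factorial(n) // factorial(k) for k in range(n + 1))

def d_brute(n):
    # count fixed-point-free permutations directly
    return sum(all(pi[i] != i for i in range(n))
               for pi in permutations(range(n)))

brute_ok = all(d_formula(n) == d_brute(n) for n in range(1, 8))
ratio_error = abs(d_formula(12) / factorial(12) - exp(-1))
```

The alternating series bounds the error by $1/(n+1)!$, so at $n=12$ the ratio already agrees with $e^{-1}$ to ten decimal places.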

4.II.8B

Suppose that $a, b$ are coprime positive integers. Write down an integer $d>0$ such that $a^{d} \equiv 1$ modulo $b$. The least such $d$ is the order of $a$ modulo $b$. Show that if the order of $a$ modulo $b$ is $y$, and $a^{x} \equiv 1$ modulo $b$, then $y$ divides $x$.

Let $n \geqslant 2$ and $F_{n}=2^{2^{n}}+1$. Suppose that $p$ is a prime factor of $F_{n}$. Find the order of 2 modulo $p$, and show that $p \equiv 1$ modulo $2^{n+1}$.
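The conclusion can be illustrated with the classical case $n=5$: Euler's factor $p=641$ divides $F_{5}=2^{32}+1$, the order of $2$ modulo $641$ is $2^{6}=2^{n+1}$, and indeed $641 \equiv 1 \pmod{64}$:

```python
# Illustration with n = 5 and the known prime factor p = 641 of F_5.
n = 5
F = 2 ** (2 ** n) + 1
p = 641
assert F % p == 0              # 641 | 2^32 + 1 (Euler's factorisation)

order = 1                      # multiplicative order of 2 modulo p
while pow(2, order, p) != 1:
    order += 1
```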

2.I.3F

Define the indicator function $I_{A}$ of an event $A$.

Let $I_{i}$ be the indicator function of the event $A_{i}, 1 \leq i \leq n$, and let $N=\sum_{1}^{n} I_{i}$ be the number of values of $i$ such that $A_{i}$ occurs. Show that $E(N)=\sum_{i} p_{i}$ where $p_{i}=P\left(A_{i}\right)$, and find $\operatorname{var}(N)$ in terms of the quantities $p_{i j}=P\left(A_{i} \cap A_{j}\right)$.

Using Chebyshev's inequality or otherwise, show that

$P(N=0) \leq \frac{\operatorname{var}(N)}{\{E(N)\}^{2}}$

2.I.4F

A coin shows heads with probability $p$ on each toss. Let $\pi_{n}$ be the probability that the number of heads after $n$ tosses is even. Show carefully that $\pi_{n+1}=(1-p) \pi_{n}+p\left(1-\pi_{n}\right)$, $n \geq 1$, and hence find $\pi_{n}$. [The number 0 is even.]
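The resulting closed form is $\pi_{n}=\frac{1}{2}\left(1+(1-2p)^{n}\right)$, which can be checked against the recursion directly (the value $p=0.3$ below is an arbitrary illustration):

```python
# Compare the recursion pi_{n+1} = (1-p) pi_n + p (1 - pi_n)
# with the closed form pi_n = (1 + (1-2p)^n) / 2.
p = 0.3
pi = 1.0                        # pi_0 = 1: zero heads, and 0 is even
max_err = 0.0
for n in range(1, 30):
    pi = (1 - p) * pi + p * (1 - pi)
    closed = (1 + (1 - 2 * p) ** n) / 2
    max_err = max(max_err, abs(pi - closed))
```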

2.II.10F

There is a random number $N$ of foreign objects in my soup, with mean $\mu$ and finite variance. Each object is a fly with probability $p$, and otherwise is a spider; different objects have independent types. Let $F$ be the number of flies and $S$ the number of spiders.

(a) Show that $G_{F}(s)=G_{N}(p s+1-p)$. [$G_{X}$ denotes the probability generating function of a random variable $X$. You should present a clear statement of any general result used.]

(b) Suppose $N$ has the Poisson distribution with parameter $\mu$. Show that $F$ has the Poisson distribution with parameter $\mu p$, and that $F$ and $S$ are independent.

(c) Let $p=\frac{1}{2}$ and suppose that $F$ and $S$ are independent. [You are given nothing about the distribution of $N$.] Show that $G_{N}(s)=G_{N}\left(\frac{1}{2}(1+s)\right)^{2}$. By working with the function $H(s)=G_{N}(1-s)$ or otherwise, deduce that $N$ has the Poisson distribution. [You may assume that $\left(1+\frac{x}{n}+\mathrm{o}\left(n^{-1}\right)\right)^{n} \rightarrow e^{x}$ as $n \rightarrow \infty$.]

2.II.11F

Let $X, Y, Z$ be independent random variables each with the uniform distribution on the interval $[0,1]$.

(a) Show that $X+Y$ has density function

$f_{X+Y}(u)= \begin{cases}u & \text { if } 0 \leq u \leq 1 \\ 2-u & \text { if } 1 \leq u \leq 2 \\ 0 & \text { otherwise }\end{cases}$

(b) Show that $P(Z>X+Y)=\frac{1}{6}$.

(c) You are provided with three rods of respective lengths $X, Y, Z$. Show that the probability that these rods may be used to form the sides of a triangle is $\frac{1}{2}$.

(d) Find the density function $f_{X+Y+Z}(s)$ of $X+Y+Z$ for $0 \leqslant s \leqslant 1$. Let $W$ be uniformly distributed on $[0,1]$, and independent of $X, Y, Z$. Show that the probability that rods of lengths $W, X, Y, Z$ may be used to form the sides of a quadrilateral is $\frac{5}{6}$.
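The exact answers $\frac{1}{6}$ and $\frac{1}{2}$ in parts (b) and (c) can be corroborated by a quick Monte Carlo simulation (a sanity check only; the seed and sample size are arbitrary):

```python
import random

# Monte Carlo estimates of P(Z > X+Y) and of the triangle probability.
random.seed(0)
N = 200_000
count_b = count_c = 0
for _ in range(N):
    x, y, z = (random.random() for _ in range(3))
    if z > x + y:
        count_b += 1
    # three rods form a triangle iff the longest is shorter than
    # the sum of the other two
    if 2 * max(x, y, z) < x + y + z:
        count_c += 1
est_b, est_c = count_b / N, count_c / N
```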

2.II.12F

(a) Explain what is meant by the term 'branching process'.

(b) Let $X_{n}$ be the size of the $n$th generation of a branching process in which each family size has probability generating function $G$, and assume that $X_{0}=1$. Show that the probability generating function $G_{n}$ of $X_{n}$ satisfies $G_{n+1}(s)=G_{n}(G(s))$ for $n \geq 1$.

(c) Show that $G(s)=1-\alpha(1-s)^{\beta}$ is the probability generating function of a non-negative integer-valued random variable when $\alpha, \beta \in(0,1)$, and find $G_{n}$ explicitly when $G$ is thus given.

(d) Find the probability that $X_{n}=0$, and show that it converges as $n \rightarrow \infty$ to $1-\alpha^{1 /(1-\beta)}$. Explain carefully why this implies that the probability of ultimate extinction equals $1-\alpha^{1 /(1-\beta)}$.
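Since $P(X_{n}=0)=G_{n}(0)$ satisfies $q_{n+1}=G(q_{n})$ with $q_{0}=0$, the claimed limit can be observed by iteration. The parameters below are an arbitrary illustration:

```python
# Iterate q_{n+1} = G(q_n) for G(s) = 1 - alpha*(1-s)^beta and compare
# with the claimed extinction probability 1 - alpha^(1/(1-beta)).
alpha, beta = 0.5, 0.5          # illustrative values in (0, 1)
G = lambda s: 1 - alpha * (1 - s) ** beta
q = 0.0                          # q_0 = P(X_0 = 0) = 0 since X_0 = 1
for _ in range(200):
    q = G(q)
limit = 1 - alpha ** (1 / (1 - beta))
```

With $\alpha=\beta=\frac{1}{2}$ the limit is $1-(\frac{1}{2})^{2}=\frac{3}{4}$.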

2.II.9F

(a) Define the conditional probability $P(A \mid B)$ of the event $A$ given the event $B$. Let $\left\{B_{i}: 1 \leq i \leq n\right\}$ be a partition of the sample space $\Omega$ such that $P\left(B_{i}\right)>0$ for all $i$. Show that, if $P(A)>0$,

$P\left(B_{i} \mid A\right)=\frac{P\left(A \mid B_{i}\right) P\left(B_{i}\right)}{\sum_{j} P\left(A \mid B_{j}\right) P\left(B_{j}\right)} .$

(b) There are $n$ urns, the $r$ th of which contains $r-1$ red balls and $n-r$ blue balls. You pick an urn (uniformly) at random and remove two balls without replacement. Find the probability that the first ball is blue, and the conditional probability that the second ball is blue given that the first is blue. [You may assume that $\sum_{i=1}^{n-1} i(i-1)=\frac{1}{3} n(n-1)(n-2)$.]

(c) What is meant by saying that two events $A$ and $B$ are independent?

(d) Two fair dice are rolled. Let $A_{s}$ be the event that the sum of the numbers shown is $s$, and let $B_{i}$ be the event that the first die shows $i$. For what values of $s$ and $i$ are the two events $A_{s}, B_{i}$ independent?
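Part (d) admits an exhaustive check over the $36$ equally likely outcomes, which singles out $s=7$ (for every $i$) as the only independent combination:

```python
from fractions import Fraction
from itertools import product

# Enumerate all (s, i) with P(A_s ∩ B_i) = P(A_s) P(B_i), exactly.
outcomes = list(product(range(1, 7), repeat=2))
pr = Fraction(1, 36)

independent_pairs = set()
for s in range(2, 13):
    for i in range(1, 7):
        pA = sum(1 for d1, d2 in outcomes if d1 + d2 == s) * pr
        pB = Fraction(1, 6)
        pAB = sum(1 for d1, d2 in outcomes if d1 + d2 == s and d1 == i) * pr
        if pAB == pA * pB:
            independent_pairs.add((s, i))
```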

3.I.3A

Determine whether each of the following is the exact differential of a function, and if so, find such a function: (a) $(\cosh \theta+\sinh \theta \cos \phi) d \theta+(\cosh \theta \sin \phi+\cos \phi) d \phi$, (b) $3 x^{2}\left(y^{2}+1\right) d x+2\left(y x^{3}-z^{2}\right) d y-4 y z d z$.

3.I.4A

State the divergence theorem.

Consider the integral

$I=\int_{S} r^{n} \mathbf{r} \cdot d \mathbf{S}$

where $n>0$ and $S$ is the sphere of radius $R$ centred at the origin. Evaluate $I$ directly, and by means of the divergence theorem.

3.II.10A

The domain $S$ in the $(x, y)$ plane is bounded by $y=x$, $y=a x$ $(0 \leqslant a \leqslant 1)$ and $x y^{2}=1$ $(x, y \geqslant 0)$. Find a transformation

$u=f(x, y), \quad v=g(x, y)$

such that $S$ is transformed into a rectangle in the $(u, v)$ plane.

Evaluate

$\int_{D} \frac{y^{2} z^{2}}{x} d x d y d z$

where $D$ is the region bounded by

$y=x, \quad y=z x, \quad x y^{2}=1 \quad(x, y \geqslant 0)$

and the planes

$z=0, \quad z=1$

3.II.11A

Prove that

$\nabla \times(\mathbf{a} \times \mathbf{b})=\mathbf{a} \nabla \cdot \mathbf{b}-\mathbf{b} \nabla \cdot \mathbf{a}+(\mathbf{b} \cdot \nabla) \mathbf{a}-(\mathbf{a} \cdot \nabla) \mathbf{b}$

$S$ is an open orientable surface in $\mathbb{R}^{3}$ with unit normal $\mathbf{n}$, and $\mathbf{v}(\mathbf{x})$ is any continuously differentiable vector field such that $\mathbf{n} \cdot \mathbf{v}=0$ on $S$. Let $\mathbf{m}$ be a continuously differentiable unit vector field which coincides with $\mathbf{n}$ on $S$. By applying Stokes' theorem to $\mathbf{m} \times \mathbf{v}$, show that

$\int_{S}\left(\delta_{i j}-n_{i} n_{j}\right) \frac{\partial v_{i}}{\partial x_{j}} d S=\oint_{C} \mathbf{u} \cdot \mathbf{v} d s$

where $s$ denotes arc-length along the boundary $C$ of $S$, and $\mathbf{u}$ is such that $\mathbf{u} d s=d \mathbf{s} \times \mathbf{n}$. Verify this result by taking $\mathbf{v}=\mathbf{r}$, and $S$ to be the disc $|\mathbf{r}| \leqslant R$ in the $z=0$ plane.

3.II.12A

(a) Show, using Cartesian coordinates, that $\psi=1 / r$ satisfies Laplace's equation, $\nabla^{2} \psi=0$, on $\mathbb{R}^{3} \backslash\{0\}$.

(b) $\phi$ and $\psi$ are smooth functions defined in a 3-dimensional domain $V$ bounded by a smooth surface $S$. Show that

$\int_{V}\left(\phi \nabla^{2} \psi-\psi \nabla^{2} \phi\right) d V=\int_{S}(\phi \nabla \psi-\psi \nabla \phi) \cdot d \mathbf{S}$

(c) Let $\psi=1 /\left|\mathbf{r}-\mathbf{r}_{0}\right|$, and let $V_{\varepsilon}$ be a domain bounded by a smooth outer surface $S$ and an inner surface $S_{\varepsilon}$, where $S_{\varepsilon}$ is a sphere of radius $\varepsilon$, centre $\mathbf{r}_{0}$. The function $\phi$ satisfies

$\nabla^{2} \phi=-\rho(\mathbf{r}) .$

Use parts (a) and (b) to show, taking the limit $\varepsilon \rightarrow 0$, that $\phi$ at $\mathbf{r}_{0}$ is given by

$4 \pi \phi\left(\mathbf{r}_{0}\right)=\int_{V} \frac{\rho(\mathbf{r})}{\left|\mathbf{r}-\mathbf{r}_{0}\right|} d V+\int_{S}\left(\frac{1}{\left|\mathbf{r}-\mathbf{r}_{0}\right|} \frac{\partial \phi}{\partial n}-\phi(\mathbf{r}) \frac{\partial}{\partial n} \frac{1}{\left|\mathbf{r}-\mathbf{r}_{0}\right|}\right) d S,$

where $V$ is the domain bounded by $S$.

3.II.9A

Two independent variables $x_{1}$ and $x_{2}$ are related to a third variable $t$ by

$x_{1}=a+\alpha t, \quad x_{2}=b+\beta t,$

where $a, b, \alpha$ and $\beta$ are constants. Let $f$ be a smooth function of $x_{1}$ and $x_{2}$, and let $F(t)=f\left(x_{1}, x_{2}\right)$. Show, by using the Taylor series for $F(t)$ about $t=0$, that

$\begin{gathered} f\left(x_{1}, x_{2}\right)=f(a, b)+\left(x_{1}-a\right) \frac{\partial f}{\partial x_{1}}+\left(x_{2}-b\right) \frac{\partial f}{\partial x_{2}} \\ +\frac{1}{2}\left(\left(x_{1}-a\right)^{2} \frac{\partial^{2} f}{\partial x_{1}^{2}}+2\left(x_{1}-a\right)\left(x_{2}-b\right) \frac{\partial^{2} f}{\partial x_{1} \partial x_{2}}+\left(x_{2}-b\right)^{2} \frac{\partial^{2} f}{\partial x_{2}^{2}}\right)+\ldots \end{gathered}$

where all derivatives are evaluated at $x_{1}=a, x_{2}=b$.

Hence show that a stationary point $(a, b)$ of $f\left(x_{1}, x_{2}\right)$ is a local minimum if

$H_{11}>0, \quad \operatorname{det} H_{i j}>0$

where $H_{i j}=\frac{\partial^{2} f}{\partial x_{i} \partial x_{j}}$ is the Hessian matrix evaluated at $(a, b)$.

Find two local minima of

$f\left(x_{1}, x_{2}\right)=x_{1}^{4}-x_{1}^{2}+2 x_{1} x_{2}+x_{2}^{2}$
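Setting the gradient to zero gives $x_{2}=-x_{1}$ and $4x_{1}^{3}-4x_{1}=0$, suggesting stationary points at $(0,0)$ and $(\pm 1, \mp 1)$; the Hessian criterion then identifies the minima. These candidates can be verified mechanically:

```python
# Check the candidate stationary points of
# f(x1, x2) = x1^4 - x1^2 + 2 x1 x2 + x2^2 and apply the Hessian test.

def grad(x1, x2):
    return (4 * x1**3 - 2 * x1 + 2 * x2, 2 * x1 + 2 * x2)

def hessian(x1, x2):
    # d2f/dx1^2 = 12 x1^2 - 2, mixed = 2, d2f/dx2^2 = 2
    return ((12 * x1**2 - 2, 2), (2, 2))

candidates = [(1, -1), (-1, 1), (0, 0)]
stationary = [p for p in candidates if grad(*p) == (0, 0)]

def is_local_min(p):
    H = hessian(*p)
    return H[0][0] > 0 and H[0][0] * H[1][1] - H[0][1] * H[1][0] > 0

minima = [p for p in stationary if is_local_min(p)]
```

At $(\pm 1, \mp 1)$ the Hessian has $H_{11}=10>0$ and determinant $16>0$, while at the origin the determinant is $-8<0$ (a saddle), leaving exactly two local minima.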