• Paper 2, Section I, A

Let $f(x, y)=g(u, v)$ where the variables $\{x, y\}$ and $\{u, v\}$ are related by a smooth, invertible transformation. State the chain rule expressing the derivatives $\frac{\partial g}{\partial u}$ and $\frac{\partial g}{\partial v}$ in terms of $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ and use this to deduce that

$\frac{\partial^{2} g}{\partial u \partial v}=\frac{\partial x}{\partial u} \frac{\partial x}{\partial v} \frac{\partial^{2} f}{\partial x^{2}}+\left(\frac{\partial x}{\partial u} \frac{\partial y}{\partial v}+\frac{\partial x}{\partial v} \frac{\partial y}{\partial u}\right) \frac{\partial^{2} f}{\partial x \partial y}+\frac{\partial y}{\partial u} \frac{\partial y}{\partial v} \frac{\partial^{2} f}{\partial y^{2}}+H \frac{\partial f}{\partial x}+K \frac{\partial f}{\partial y}$

where $H$ and $K$ are second-order partial derivatives, to be determined.

Using the transformation $x=u v$ and $y=u / v$ in the above identity, or otherwise, find the general solution of

$x \frac{\partial^{2} f}{\partial x^{2}}-\frac{y^{2}}{x} \frac{\partial^{2} f}{\partial y^{2}}+\frac{\partial f}{\partial x}-\frac{y}{x} \frac{\partial f}{\partial y}=0$
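
As a hedged sketch (not part of the required derivation): any function of the form $f=F(xy)+G(x/y)$, with $F$ and $G$ arbitrary smooth functions, can be checked symbolically against this equation. This trial form is suggested by the given transformation, since $u^{2}=xy$ and $v^{2}=x/y$.

```python
# Sanity check: f(x, y) = F(x*y) + G(x/y) leaves zero residual in the PDE,
# for arbitrary smooth F and G.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
F, G = sp.Function('F'), sp.Function('G')
f = F(x*y) + G(x/y)

residual = sp.simplify(
    x*sp.diff(f, x, 2) - (y**2/x)*sp.diff(f, y, 2)
    + sp.diff(f, x) - (y/x)*sp.diff(f, y)
)
print(residual)
```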

• Paper 2, Section I, A

Find the general solutions to the following difference equations for $y_{n}, n \in \mathbb{N}$.

\begin{aligned} \text { (i) } & y_{n+3}-3 y_{n+1}+2 y_{n}=0, \\ \text { (ii) } & y_{n+3}-3 y_{n+1}+2 y_{n}=2^{n}, \\ \text { (iii) } & y_{n+3}-3 y_{n+1}+2 y_{n}=(-2)^{n}, \\ \text { (iv) } & y_{n+3}-3 y_{n+1}+2 y_{n}=(-2)^{n}+2^{n} . \end{aligned}
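
As a quick numerical sketch (the constants $A$, $B$, $C$ below are arbitrary choices): the auxiliary polynomial $\lambda^{3}-3\lambda+2=(\lambda-1)^{2}(\lambda+2)$ has a double root $1$ and a simple root $-2$, which can be confirmed along with a trial homogeneous solution for case (i).

```python
# Roots of the auxiliary polynomial k^3 - 3k + 2, plus a residual check of
# y_n = (A + B*n)*1**n + C*(-2)**n for the homogeneous recurrence (i).
import numpy as np

roots = sorted(np.roots([1, 0, -3, 2]).real)   # expect -2 and a double root 1

A, B, C = 2.0, -1.5, 0.7                       # arbitrary constants
y = lambda n: (A + B*n) + C*(-2.0)**n
residuals = [y(n+3) - 3*y(n+1) + 2*y(n) for n in range(10)]
print(roots, max(abs(r) for r in residuals))
```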

• Paper 2, Section II, 6A

(a) By using a power series of the form

$y(x)=\sum_{k=0}^{\infty} a_{k} x^{k}$

or otherwise, find the general solution of the differential equation

$x y^{\prime \prime}-(1-x) y^{\prime}-y=0.$
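
As a hedged check (the exam, of course, requires the series derivation itself): two candidate basis solutions, $1-x$ and $e^{-x}$, can be verified directly against the equation.

```python
# Verify that y = 1 - x and y = exp(-x) both satisfy
# x*y'' - (1 - x)*y' - y = 0.
import sympy as sp

x = sp.symbols('x')
candidates = [1 - x, sp.exp(-x)]
residuals = [sp.simplify(x*sp.diff(y, x, 2) - (1 - x)*sp.diff(y, x) - y)
             for y in candidates]
print(residuals)
```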

(b) Define the Wronskian $W(x)$ for a second order linear differential equation

$y^{\prime \prime}+p(x) y^{\prime}+q(x) y=0$

and show that $W^{\prime}+p(x) W=0$. Given a non-trivial solution $y_{1}(x)$ of $(2)$, show that $W(x)$ can be used to find a second solution $y_{2}(x)$ of $(2)$, and give an expression for $y_{2}(x)$ in the form of an integral.

(c) Consider the equation (2) with

$p(x)=-\frac{P(x)}{x} \quad \text { and } \quad q(x)=-\frac{Q(x)}{x}$

where $P$ and $Q$ have Taylor expansions

$P(x)=P_{0}+P_{1} x+\ldots, \quad Q(x)=Q_{0}+Q_{1} x+\ldots$

with $P_{0}$ a positive integer. Find the roots of the indicial equation for (2) with these assumptions. If $y_{1}(x)=1+\beta x+\ldots$ is a solution, use the method of part (b) to find the first two terms in a power series expansion of a linearly independent solution $y_{2}(x)$, expressing the coefficients in terms of $P_{0}, P_{1}$ and $\beta$.

• Paper 2, Section II, A

(a) Consider the differential equation

$a_{n} \frac{d^{n} y}{d x^{n}}+a_{n-1} \frac{d^{n-1} y}{d x^{n-1}}+\ldots+a_{2} \frac{d^{2} y}{d x^{2}}+a_{1} \frac{d y}{d x}+a_{0} y=0$

with $n \in \mathbb{N}$ and $a_{0}, \ldots, a_{n} \in \mathbb{R}$. Show that $y(x)=e^{\lambda x}$ is a solution if and only if $p(\lambda)=0$ where

$p(\lambda)=a_{n} \lambda^{n}+a_{n-1} \lambda^{n-1}+\ldots+a_{2} \lambda^{2}+a_{1} \lambda+a_{0}$

Show further that $y(x)=x e^{\mu x}$ is also a solution of $(1)$ if $\mu$ is a root of the polynomial $p(\lambda)$ of multiplicity at least 2.

(b) By considering $v(t)=\frac{d^{2} u}{d t^{2}}$, or otherwise, find the general real solution for $u(t)$ satisfying

$\frac{d^{4} u}{d t^{4}}+2 \frac{d^{2} u}{d t^{2}}=4 t^{2}$

By using a substitution of the form $u(t)=y\left(t^{2}\right)$ in $(2)$, or otherwise, find the general real solution for $y(x)$, with $x$ positive, where

$4 x^{2} \frac{d^{4} y}{d x^{4}}+12 x \frac{d^{3} y}{d x^{3}}+(3+2 x) \frac{d^{2} y}{d x^{2}}+\frac{d y}{d x}=x$
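
For the constant-coefficient equation for $u(t)$ in part (b), one candidate general real solution (an assumption to be verified, with $C_{1},\ldots,C_{4}$ arbitrary) is $u=\tfrac{1}{6}t^{4}-t^{2}+C_{1}+C_{2}t+C_{3}\cos \sqrt{2}\,t+C_{4}\sin \sqrt{2}\,t$, which can be checked symbolically:

```python
# Residual check of the candidate solution of u'''' + 2 u'' = 4 t^2.
import sympy as sp

t = sp.symbols('t')
C1, C2, C3, C4 = sp.symbols('C1:5')            # arbitrary real constants
u = (t**4/6 - t**2 + C1 + C2*t
     + C3*sp.cos(sp.sqrt(2)*t) + C4*sp.sin(sp.sqrt(2)*t))
residual = sp.simplify(sp.diff(u, t, 4) + 2*sp.diff(u, t, 2) - 4*t**2)
print(residual)
```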

• Paper 2, Section II, A

(a) State how the nature of a critical (or stationary) point of a function $f(\mathbf{x})$ with $\mathbf{x} \in \mathbb{R}^{n}$ can be determined by consideration of the eigenvalues of the Hessian matrix $H$ of $f(\mathbf{x})$, assuming $H$ is non-singular.

(b) Let $f(x, y)=x y(1-x-y)$. Find all the critical points of the function $f(x, y)$ and determine their nature. Determine the zero contour of $f(x, y)$ and sketch a contour plot showing the behaviour of the contours in the neighbourhood of the critical points.

(c) Now let $g(x, y)=x^{3} y^{2}(1-x-y)$. Show that $(0,1)$ is a critical point of $g(x, y)$ for which the Hessian matrix of $g$ is singular. Find an approximation for $g(x, y)$ to lowest non-trivial order in the neighbourhood of the point $(0,1)$. Does $g$ have a maximum or a minimum at $(0,1)$ ? Justify your answer.
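
A symbolic sketch for part (b) (an aid to checking, not a substitute for the working): locate the stationary points of $f(x, y)=x y(1-x-y)$ and inspect the eigenvalues of the Hessian at each.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x*y*(1 - x - y)

grad = [sp.diff(f, v) for v in (x, y)]
crit = sp.solve(grad, [x, y], dict=True)       # all stationary points
H = sp.hessian(f, (x, y))
eigs = {(pt[x], pt[y]): list(H.subs(pt).eigenvals()) for pt in crit}
print(eigs)
```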

• Paper 2, Section II, A

(a) Find the general solution of the system of differential equations

$\left(\begin{array}{l} \dot{x} \\ \dot{y} \\ \dot{z} \end{array}\right)=\left(\begin{array}{rrr} -1 & 2 & -1 \\ 1 & 0 & -1 \\ 1 & -2 & 1 \end{array}\right)\left(\begin{array}{l} x \\ y \\ z \end{array}\right)$

(b) Depending on the parameter $\lambda \in \mathbb{R}$, find the general solution of the system of differential equations

$\left(\begin{array}{l} \dot{x} \\ \dot{y} \\ \dot{z} \end{array}\right)=\left(\begin{array}{rrr} -1 & 2 & -1 \\ 1 & 0 & -1 \\ 1 & -2 & 1 \end{array}\right)\left(\begin{array}{l} x \\ y \\ z \end{array}\right)+2\left(\begin{array}{r} -\lambda \\ 1 \\ \lambda \end{array}\right) e^{2 t},$

and explain why $(2)$ has a particular solution of the form $\mathbf{c} e^{2 t}$ with constant vector $\mathbf{c} \in \mathbb{R}^{3}$ for $\lambda=1$ but not for $\lambda \neq 1$.

[Hint: decompose $\left(\begin{array}{c}-\lambda \\ 1 \\ \lambda\end{array}\right)$ in terms of the eigenbasis of the matrix in (1).]

(c) For $\lambda=-1$, find the solution of (2) which goes through the point $(0,1,0)$ at $t=0$.
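
A symbolic sketch of the hint (an aid, not the required working): the coefficient matrix has eigenvalues $-2,0,2$, and the component of the forcing vector along the eigenvalue-$2$ eigenvector vanishes exactly at the special value of the parameter.

```python
import sympy as sp

M = sp.Matrix([[-1, 2, -1], [1, 0, -1], [1, -2, 1]])
evals = sorted(M.eigenvals())                  # expect [-2, 0, 2]

lam = sp.symbols('lam')
b = sp.Matrix([-lam, 1, lam])                  # forcing direction in (2)
vecs = [(M - e*sp.eye(3)).nullspace()[0] for e in evals]
P = sp.Matrix.hstack(*vecs)
coeffs = sp.simplify(P.solve(b))               # b expanded in the eigenbasis
resonant = coeffs[evals.index(2)]              # component along eigenvalue 2
print(resonant)
```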


• Paper 2, Section I, F

Let $X$ and $Y$ be two non-constant random variables with finite variances. The correlation coefficient $\rho(X, Y)$ is defined by

$\rho(X, Y)=\frac{\mathbb{E}[(X-\mathbb{E} X)(Y-\mathbb{E} Y)]}{(\operatorname{Var} X)^{1 / 2}(\operatorname{Var} Y)^{1 / 2}}$

(a) Using the Cauchy-Schwarz inequality or otherwise, prove that

$-1 \leqslant \rho(X, Y) \leqslant 1$

(b) What can be said about the relationship between $X$ and $Y$ when either (i) $\rho(X, Y)=0$ or (ii) $|\rho(X, Y)|=1$? [Proofs are not required.]

(c) Take $0 \leqslant r \leqslant 1$ and let $X, X^{\prime}$ be independent random variables taking values $\pm 1$ with probabilities $1 / 2$. Set

$Y= \begin{cases}X, & \text { with probability } r \\ X^{\prime}, & \text { with probability } 1-r\end{cases}$

Find $\rho(X, Y)$.
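
A Monte Carlo sketch for part (c) (the value of $r$ and the seed below are arbitrary choices): since $X$ and $Y$ each have mean $0$ and variance $1$ here, $\rho(X, Y)=\mathbb{E}[XY]$, which the simulation estimates.

```python
import random

random.seed(0)
r, trials = 0.3, 200_000
total = 0.0
for _ in range(trials):
    x = random.choice([-1, 1])
    xp = random.choice([-1, 1])          # independent copy X'
    y = x if random.random() < r else xp
    total += x * y
rho_hat = total / trials                 # estimates E[XY] = rho(X, Y)
print(rho_hat)
```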

• Paper 2, Section I, F

Jensen's inequality states that for a convex function $f$ and a random variable $X$ with a finite mean, $\mathbb{E} f(X) \geqslant f(\mathbb{E} X)$.

(a) Suppose that $f(x)=x^{m}$ where $m$ is a positive integer, and $X$ is a random variable taking values $x_{1}, \ldots, x_{N} \geqslant 0$ with equal probabilities, and where the sum $x_{1}+\ldots+x_{N}=1$. Deduce from Jensen's inequality that

$\sum_{i=1}^{N} f\left(x_{i}\right) \geqslant N f\left(\frac{1}{N}\right)$

(b) $N$ horses take part in $m$ races. The results of different races are independent. The probability for horse $i$ to win any given race is $p_{i} \geqslant 0$, with $p_{1}+\ldots+p_{N}=1$.

Let $Q$ be the probability that a single horse wins all $m$ races. Express $Q$ as a polynomial of degree $m$ in the variables $p_{1}, \ldots, p_{N}$.

By using (1) or otherwise, prove that $Q \geqslant N^{1-m}$.
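
A numerical sketch of the bound ($N$, $m$, and the random probability vector are arbitrary choices), taking $Q=\sum_{i} p_{i}^{m}$, the probability that some single horse wins all $m$ independent races:

```python
import random

random.seed(1)
N, m = 6, 4
w = [random.random() for _ in range(N)]
p = [wi / sum(w) for wi in w]             # random probability vector

Q = sum(pi**m for pi in p)                # some one horse wins all m races
bound = N**(1 - m)                        # the Jensen lower bound
print(Q, bound)
```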

• Paper 2, Section II, F

Let $X_{1}, X_{2}$ be bivariate normal random variables, with the joint probability density function

$f_{X_{1}, X_{2}}\left(x_{1}, x_{2}\right)=\frac{1}{2 \pi \sigma_{1} \sigma_{2} \sqrt{1-\rho^{2}}} \exp \left[-\frac{\varphi\left(x_{1}, x_{2}\right)}{2\left(1-\rho^{2}\right)}\right]$

where

$\varphi\left(x_{1}, x_{2}\right)=\left(\frac{x_{1}-\mu_{1}}{\sigma_{1}}\right)^{2}-2 \rho\left(\frac{x_{1}-\mu_{1}}{\sigma_{1}}\right)\left(\frac{x_{2}-\mu_{2}}{\sigma_{2}}\right)+\left(\frac{x_{2}-\mu_{2}}{\sigma_{2}}\right)^{2}$

and $x_{1}, x_{2} \in \mathbb{R}$.

(a) Deduce that the marginal probability density function

$f_{X_{1}}\left(x_{1}\right)=\frac{1}{\sqrt{2 \pi} \sigma_{1}} \exp \left[-\frac{\left(x_{1}-\mu_{1}\right)^{2}}{2 \sigma_{1}^{2}}\right]$

(b) Write down the moment-generating function of $X_{2}$ in terms of $\mu_{2}$ and $\sigma_{2}$. [No proofs are required.]

(c) By considering the ratio $f_{X_{1}, X_{2}}\left(x_{1}, x_{2}\right) / f_{X_{2}}\left(x_{2}\right)$ prove that, conditional on $X_{2}=x_{2}$, the distribution of $X_{1}$ is normal, with mean and variance $\mu_{1}+\rho \sigma_{1}\left(x_{2}-\mu_{2}\right) / \sigma_{2}$ and $\sigma_{1}^{2}\left(1-\rho^{2}\right)$, respectively.
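
A numerical sanity check of the claim in (c) (the parameter values below are arbitrary): at randomly chosen points, the ratio $f_{X_{1}, X_{2}} / f_{X_{2}}$ agrees with the stated normal density.

```python
import math
import random

random.seed(2)
mu1, mu2, s1, s2, rho = 0.5, -1.0, 1.3, 0.8, 0.6

def joint(x1, x2):
    """Bivariate normal density as given in the question."""
    phi = (((x1 - mu1)/s1)**2
           - 2*rho*((x1 - mu1)/s1)*((x2 - mu2)/s2)
           + ((x2 - mu2)/s2)**2)
    return (math.exp(-phi/(2*(1 - rho**2)))
            / (2*math.pi*s1*s2*math.sqrt(1 - rho**2)))

def normal_pdf(x, m, var):
    return math.exp(-(x - m)**2/(2*var)) / math.sqrt(2*math.pi*var)

max_err = 0.0
for _ in range(100):
    x1, x2 = random.uniform(-4, 4), random.uniform(-4, 4)
    cond = joint(x1, x2) / normal_pdf(x2, mu2, s2**2)
    claimed = normal_pdf(x1, mu1 + rho*s1*(x2 - mu2)/s2,
                         s1**2*(1 - rho**2))
    max_err = max(max_err, abs(cond - claimed))
print(max_err)
```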

• Paper 2, Section II, F

In a branching process every individual has probability $p_{k}$ of producing exactly $k$ offspring, $k=0,1, \ldots$, and the individuals of each generation produce offspring independently of each other and of individuals in preceding generations. Let $X_{n}$ represent the size of the $n$th generation. Assume that $X_{0}=1$ and $p_{0}>0$ and let $F_{n}(s)$ be the generating function of $X_{n}$. Thus

$F_{1}(s)=\mathbb{E} s^{X_{1}}=\sum_{k=0}^{\infty} p_{k} s^{k}, \quad|s| \leqslant 1$

(a) Prove that

$F_{n+1}(s)=F_{n}\left(F_{1}(s)\right)$

(b) State a result in terms of $F_{1}(s)$ about the probability of eventual extinction. [No proofs are required.]

(c) Suppose the probability that an individual leaves $k$ descendants in the next generation is $p_{k}=1 / 2^{k+1}$, for $k \geqslant 0$. Show from the result you state in (b) that extinction is certain. Prove further that in this case

$F_{n}(s)=\frac{n-(n-1) s}{(n+1)-n s}, \quad n \geqslant 1$

and deduce the probability that the $n$th generation is empty.
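
A symbolic sketch (using $F_{1}(s)=1/(2-s)$, which follows from summing the geometric series for $p_{k}=1/2^{k+1}$): the stated formula for $F_{n}$ is consistent with the composition rule of part (a), and $F_{n}(0)$ gives the probability that the $n$th generation is empty.

```python
import sympy as sp

s = sp.symbols('s')
n = sp.symbols('n', positive=True, integer=True)

F1 = 1/(2 - s)                                # sum of s^k / 2^(k+1)
Fn = (n - (n - 1)*s)/((n + 1) - n*s)          # claimed formula

check = sp.simplify(Fn.subs(n, n + 1) - Fn.subs(s, F1))
p_empty = sp.simplify(Fn.subs(s, 0))          # P(X_n = 0)
print(check, p_empty)
```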

• Paper 2, Section II, F

The yearly levels of water in the river Camse are independent random variables $X_{1}, X_{2}, \ldots$, with a given continuous distribution function $F(x)=\mathbb{P}\left(X_{i} \leqslant x\right), x \geqslant 0$ and $F(0)=0$. The levels have been observed in years $1, \ldots, n$ and their values $X_{1}, \ldots, X_{n}$ recorded. The local council has decided to construct a dam of height

$Y_{n}=\max \left[X_{1}, \ldots, X_{n}\right]$

Let $\tau$ be the subsequent time that elapses before the dam overflows:

$\tau=\min \left[t \geqslant 1: X_{n+t}>Y_{n}\right]$

(a) Find the distribution function $\mathbb{P}\left(Y_{n} \leqslant z\right), z>0$, and show that the mean value $\mathbb{E} Y_{n}=\int_{0}^{\infty}\left[1-F(z)^{n}\right] \mathrm{d} z .$

(b) Express the conditional probability $\mathbb{P}\left(\tau=k \mid Y_{n}=z\right)$, where $k=1,2, \ldots$ and $z>0$, in terms of $F$.

(c) Show that the unconditional probability

$\mathbb{P}(\tau=k)=\frac{n}{(k+n-1)(k+n)}, \quad k=1,2, \ldots$

(d) Determine the mean value $\mathbb{E} \tau$.
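
A Monte Carlo sketch of part (c) (with $F$ taken to be the uniform distribution on $[0,1]$, which suffices because the claimed answer does not depend on $F$; $n$ and the seed are arbitrary choices):

```python
import random

random.seed(3)
n, trials = 4, 50_000
counts = {}
for _ in range(trials):
    y = max(random.random() for _ in range(n))   # dam height Y_n
    t = 1
    while random.random() <= y:                  # wait for a level above Y_n
        t += 1
    counts[t] = counts.get(t, 0) + 1

est = {k: counts.get(k, 0)/trials for k in (1, 2, 3)}
exact = {k: n/((k + n - 1)*(k + n)) for k in (1, 2, 3)}
print(est, exact)
```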

• Paper 2, Section II, F

(a) What does it mean to say that a random variable $X$ with values $n=1,2, \ldots$ has a geometric distribution with parameter $p$, where $p \in(0,1)$?

An expedition is sent to the Himalayas with the objective of catching a pair of wild yaks for breeding. Assume yaks are loners and roam about the Himalayas at random. The probability $p \in(0,1)$ that a given trapped yak is male is independent of prior outcomes. Let $N$ be the number of yaks that must be caught until a breeding pair is obtained.

(b) Find the expected value of $N$.

(c) Find the variance of $N$.
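
A simulation sketch for the expectation (the value of $p$, the seed, and the closed form marked as a candidate below are assumptions for the check, not the required derivation):

```python
import random

random.seed(4)
p, trials = 0.3, 200_000
total = 0
for _ in range(trials):
    have_m = have_f = False
    n = 0
    while not (have_m and have_f):            # catch yaks until both sexes
        n += 1
        if random.random() < p:
            have_m = True
        else:
            have_f = True
    total += n

mean_est = total / trials
candidate_mean = 1 + p/(1 - p) + (1 - p)/p    # hypothesised E[N]
print(mean_est, candidate_mean)
```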
