• # Paper 2, Section I, 2C

Consider the function

$f(x, y)=\frac{x}{y}+\frac{y}{x}-\frac{(x-y)^{2}}{a^{2}}$

defined for $x>0$ and $y>0$, where $a$ is a non-zero real constant. Show that $(\lambda, \lambda)$ is a stationary point of $f$ for each $\lambda>0$. Compute the Hessian and its eigenvalues at $(\lambda, \lambda)$.
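As an illustrative sanity check (not part of the question, and using sympy as a tool of convenience), the stationarity condition and the Hessian eigenvalues at $(\lambda, \lambda)$ can be verified symbolically:

```python
# Sanity check with sympy: (lam, lam) is stationary, and the Hessian there
# has eigenvalues 0 and 4/lam**2 - 4/a**2.
import sympy as sp

x, y, a, lam = sp.symbols('x y a lam', positive=True)
f = x/y + y/x - (x - y)**2 / a**2

grad = [sp.diff(f, v) for v in (x, y)]
# Both partial derivatives vanish on the diagonal x = y = lam.
assert all(sp.simplify(g.subs({x: lam, y: lam})) == 0 for g in grad)

H = sp.hessian(f, (x, y)).subs({x: lam, y: lam})
eigs = list(H.eigenvals())
assert any(sp.simplify(e) == 0 for e in eigs)
assert any(sp.simplify(e - (4/lam**2 - 4/a**2)) == 0 for e in eigs)
```

The zero eigenvalue reflects the fact that $f$ is constant along the ray $x = y$.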

• # Paper 2, Section I, C

(a) The numbers $z_{1}, z_{2}, \ldots$ satisfy

$z_{n+1}=z_{n}+c_{n} \quad(n \geqslant 1),$

where $c_{1}, c_{2}, \ldots$ are given constants. Find $z_{n+1}$ in terms of $c_{1}, c_{2}, \ldots, c_{n}$ and $z_{1}$.

(b) The numbers $x_{1}, x_{2}, \ldots$ satisfy

$x_{n+1}=a_{n} x_{n}+b_{n} \quad(n \geqslant 1),$

where $a_{1}, a_{2}, \ldots$ are given non-zero constants and $b_{1}, b_{2}, \ldots$ are given constants. Let $z_{1}=x_{1}$ and $z_{n+1}=x_{n+1} / U_{n}$, where $U_{n}=a_{1} a_{2} \cdots a_{n}$. Calculate $z_{n+1}-z_{n}$, and hence find $x_{n+1}$ in terms of $x_{1}, b_{1}, \ldots, b_{n}$ and $U_{1}, \ldots, U_{n}$.
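A small numerical illustration (the constants below are chosen arbitrarily, not taken from the question): the closed form obtained from the substitution, $x_{n+1}=U_{n}\bigl(x_{1}+\sum_{j=1}^{n} b_{j}/U_{j}\bigr)$, agrees with direct iteration.

```python
# Compare direct iteration of x_{n+1} = a_n x_n + b_n with the closed form
# x_{n+1} = U_n * (x_1 + sum_{j=1}^n b_j / U_j), where U_n = a_1 ... a_n.
from fractions import Fraction
from itertools import accumulate
import operator

a = [Fraction(v) for v in (2, -3, 5, 7)]   # arbitrary non-zero a_n
b = [Fraction(v) for v in (1, 4, -2, 6)]   # arbitrary b_n
x1 = Fraction(3)

# Direct iteration.
x = x1
for an, bn in zip(a, b):
    x = an * x + bn

# Closed form via z_{n+1} = x_{n+1} / U_n.
U = list(accumulate(a, operator.mul))      # U_1, ..., U_n
closed = U[-1] * (x1 + sum(bj / Uj for bj, Uj in zip(b, U)))

assert x == closed
print(closed)  # -603
```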

• # Paper 2, Section II, 7C

Let $y_{1}$ and $y_{2}$ be two solutions of the differential equation

$y^{\prime \prime}(x)+p(x) y^{\prime}(x)+q(x) y(x)=0, \quad-\infty<x<\infty,$

where $p$ and $q$ are given. Show, using the Wronskian, that

• either there exist $\alpha$ and $\beta$, not both zero, such that $\alpha y_{1}(x)+\beta y_{2}(x)$ vanishes for all $x$,

• or given $x_{0}, A$ and $B$, there exist $a$ and $b$ such that $y(x)=a y_{1}(x)+b y_{2}(x)$ satisfies the conditions $y\left(x_{0}\right)=A$ and $y^{\prime}\left(x_{0}\right)=B$.

Find power series $y_{1}$ and $y_{2}$ such that an arbitrary solution of the equation

$y^{\prime \prime}(x)=x y(x)$

can be written as a linear combination of $y_{1}$ and $y_{2}$.
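A numerical sketch of the power-series step (the truncation order $N$ is chosen here): the equation $y''=xy$ gives the recurrence $c_{n+3}=c_{n}/\bigl((n+3)(n+2)\bigr)$ with $c_{2}=0$, and the two series with $(c_{0},c_{1})=(1,0)$ and $(0,1)$ satisfy the equation up to the truncation order.

```python
# Build the two standard power series for y'' = x y from the recurrence
# c_{n+3} = c_n / ((n+3)(n+2)) and check the residual y'' - x y is O(x^{N-1}).
import sympy as sp

x = sp.symbols('x')
N = 15  # truncation order (assumed here)

def series_solution(c0, c1):
    c = [sp.Integer(0)] * (N + 3)
    c[0], c[1] = sp.Integer(c0), sp.Integer(c1)
    for n in range(N):
        c[n + 3] = c[n] / ((n + 3) * (n + 2))
    return sum(c[n] * x**n for n in range(N + 1))

y1 = series_solution(1, 0)   # 1 + x^3/6 + x^6/180 + ...
y2 = series_solution(0, 1)   # x + x^4/12 + x^7/504 + ...
for ysol in (y1, y2):
    resid = sp.expand(sp.diff(ysol, x, 2) - x * ysol)
    # Truncation leaves only terms of degree >= N - 1.
    assert resid == 0 or sp.degree(resid, x) >= N - 1
```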

• # Paper 2, Section II, C

(a) Consider the system

$\begin{aligned} &\frac{d x}{d t}=x(1-x)-x y \\ &\frac{d y}{d t}=\frac{1}{8} y(4 x-1) \end{aligned}$

for $x(t) \geqslant 0, y(t) \geqslant 0$. Find the critical points, determine their type and explain, with the help of a diagram, the behaviour of solutions for large positive times $t$.
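A sympy check of the algebraic part of (a) (this locates the critical points and their Jacobian eigenvalues; it is not a substitute for the phase-plane diagram):

```python
# Find the critical points of part (a) and print the Jacobian eigenvalues
# at each, from which the type (saddle, spiral, ...) can be read off.
import sympy as sp

x, y = sp.symbols('x y', real=True)
eqs = [x * (1 - x) - x * y, sp.Rational(1, 8) * y * (4 * x - 1)]

crit = sp.solve(eqs, [x, y], dict=True)   # (0,0), (1,0), (1/4, 3/4)
J = sp.Matrix(eqs).jacobian([x, y])
for pt in crit:
    print(pt, list(J.subs(pt).eigenvals()))
```

The eigenvalues show that $(0,0)$ and $(1,0)$ are saddles, while $(1/4, 3/4)$ has complex eigenvalues with negative real part, i.e. a stable spiral attracting solutions for large positive $t$.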

(b) Consider the system

$\begin{aligned} &\frac{d x}{d t}=y+\left(1-x^{2}-y^{2}\right) x \\ &\frac{d y}{d t}=-x+\left(1-x^{2}-y^{2}\right) y \end{aligned}$

for $(x(t), y(t)) \in \mathbb{R}^{2}$. Rewrite the system in polar coordinates by setting $x(t)=r(t) \cos \theta(t)$ and $y(t)=r(t) \sin \theta(t)$, and hence describe the behaviour of solutions for large positive and large negative times.
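A rough numerical illustration of (b) (the step size and time horizon are chosen arbitrarily): in polar coordinates the system becomes $\dot r = r(1-r^{2})$, $\dot\theta = -1$, so trajectories spiral clockwise onto the unit circle as $t \to +\infty$. A forward-Euler integration shows the attraction to $r = 1$:

```python
# Euler-integrate the Cartesian system from a point inside the unit circle
# and check that the radius approaches 1 for large positive time.
import math

def step(x, y, h):
    r2 = x * x + y * y
    dx = y + (1 - r2) * x
    dy = -x + (1 - r2) * y
    return x + h * dx, y + h * dy

x, y = 0.1, 0.0            # start well inside the unit circle
for _ in range(200000):    # integrate up to t = 20
    x, y = step(x, y, 1e-4)

r = math.hypot(x, y)
print(r)                   # close to 1
assert abs(r - 1) < 1e-3
```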

• # Paper 2, Section II, C

The current $I(t)$ at time $t$ in an electrical circuit subject to an applied voltage $V(t)$ obeys the equation

$L \frac{d^{2} I}{d t^{2}}+R \frac{d I}{d t}+\frac{1}{C} I=\frac{d V}{d t}$

where $R, L$ and $C$ are the constant resistance, inductance and capacitance of the circuit with $R \geqslant 0, L>0$ and $C>0$.

(a) In the case $R=0$ and $V(t)=0$, show that there exist time-periodic solutions of frequency $\omega_{0}$, which you should find.

(b) In the case $V(t)=H(t)$, the Heaviside function, calculate, subject to the condition

$R^{2}>\frac{4 L}{C}$

the current for $t \geqslant 0$, assuming it is zero for $t<0$.

(c) If $R>0$ and $V(t)=\sin \omega_{0} t$, where $\omega_{0}$ is as in part (a), show that there is a time-periodic solution $I_{0}(t)$ of period $T=2 \pi / \omega_{0}$ and calculate its maximum value $I_{M}$.

(i) Calculate the energy dissipated in each period, i.e., the quantity

$D=\int_{0}^{T} R I_{0}(t)^{2} d t$

Show that the quantity defined by

$Q=\frac{2 \pi}{D} \times \frac{L I_{M}^{2}}{2}$

satisfies $Q \omega_{0} R C=1$.

(ii) Write down explicitly the general solution $I(t)$ for all $R>0$, and discuss the relevance of $I_{0}(t)$ to the large time behaviour of $I(t)$.
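Parts (c) and (c)(i) can be checked symbolically. The computation suggests that at resonance $\omega_{0}=1/\sqrt{LC}$ the periodic response to $V=\sin \omega_{0} t$ is $I_{0}(t)=\sin(\omega_{0} t)/R$, so $I_{M}=1/R$; a sympy sketch (generic $R, L, C$) then confirms the identity $Q \omega_{0} R C = 1$:

```python
# Verify that I_0(t) = sin(w0 t)/R solves the circuit equation at resonance,
# compute the dissipated energy D per period, and check Q * w0 * R * C = 1.
import sympy as sp

t, R, L, C = sp.symbols('t R L C', positive=True)
w0 = 1 / sp.sqrt(L * C)

I0 = sp.sin(w0 * t) / R                       # candidate periodic solution
lhs = L * sp.diff(I0, t, 2) + R * sp.diff(I0, t) + I0 / C
rhs = sp.diff(sp.sin(w0 * t), t)              # dV/dt
assert sp.simplify(lhs - rhs) == 0

T = 2 * sp.pi / w0
IM = 1 / R                                    # maximum of I_0
D = sp.integrate(R * I0**2, (t, 0, T))        # energy dissipated per period
Q = (2 * sp.pi / D) * (L * IM**2 / 2)
assert sp.simplify(Q * w0 * R * C - 1) == 0
```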

• # Paper 2, Section II, C

(a) Solve $\frac{d z}{d t}=z^{2}$ subject to $z(0)=z_{0}$. For which $z_{0}$ is the solution finite for all $t \in \mathbb{R}$?

Let $a$ be a positive constant. By considering the lines $y=a\left(x-x_{0}\right)$ for constant $x_{0}$, or otherwise, show that any solution of the equation

$\frac{\partial f}{\partial x}+a \frac{\partial f}{\partial y}=0$

is of the form $f(x, y)=F(y-a x)$ for some function $F$.

Solve the equation

$\frac{\partial f}{\partial x}+a \frac{\partial f}{\partial y}=f^{2}$

subject to $f(0, y)=g(y)$ for a given function $g$. For which $g$ is the solution bounded on $\mathbb{R}^{2}$?
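A sanity check with sympy (for a generic $g$; the formula below is what the method of characteristics produces): $f(x, y)=g(y-ax)/\bigl(1-x\,g(y-ax)\bigr)$ satisfies the PDE and the initial condition.

```python
# Verify that f = g(y - a x) / (1 - x g(y - a x)) solves f_x + a f_y = f^2
# with f(0, y) = g(y), for an unspecified function g.
import sympy as sp

x, y, a = sp.symbols('x y a')
g = sp.Function('g')

f = g(y - a * x) / (1 - x * g(y - a * x))
pde = sp.diff(f, x) + a * sp.diff(f, y) - f**2
assert sp.simplify(pde) == 0
assert f.subs(x, 0) == g(y)
```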

(b) By means of the change of variables $X=\alpha x+\beta y$ and $T=\gamma x+\delta y$ for appropriate real numbers $\alpha, \beta, \gamma, \delta$, show that the equation

$\frac{\partial^{2} f}{\partial x^{2}}+\frac{\partial^{2} f}{\partial x \partial y}=0 \tag{*}$

can be transformed into the wave equation

$\frac{1}{c^{2}} \frac{\partial^{2} F}{\partial T^{2}}-\frac{\partial^{2} F}{\partial X^{2}}=0$

where $F$ is defined by $f(x, y)=F(\alpha x+\beta y, \gamma x+\delta y)$. Hence write down the general solution of $(*)$.
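An illustrative check of part (b) (the specific choice $X = x - 2y$, $T = x$, $c = 1$ is one valid set of constants, not the only one): on plane-wave test functions $F = e^{kX + mT}$ the operator $\partial_{x}^{2}+\partial_{x}\partial_{y}$ becomes $m^{2}-k^{2}$, i.e. $\partial_{T}^{2}-\partial_{X}^{2}$, and the resulting general solution of $(*)$ is $f = P(x-y)+Q(y)$ for arbitrary twice-differentiable $P, Q$.

```python
# Check the general solution of (*) and the change of variables on a
# plane-wave test function F(X, T) = exp(k X + m T), X = x - 2y, T = x.
import sympy as sp

x, y, k, m = sp.symbols('x y k m')
P, Q = sp.Function('P'), sp.Function('Q')

# General solution of (*): arbitrary functions P and Q.
f = P(x - y) + Q(y)
assert sp.simplify(sp.diff(f, x, 2) + sp.diff(f, x, y)) == 0

Fw = sp.exp(k * (x - 2 * y) + m * x)
lhs = sp.diff(Fw, x, 2) + sp.diff(Fw, x, y)   # f_xx + f_xy
rhs = (m**2 - k**2) * Fw                      # (F_TT - F_XX) for this F
assert sp.simplify(lhs - rhs) == 0
```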


• # Paper 2, Section I, F

Let $X$ and $Y$ be real-valued random variables with joint density function

$f(x, y)= \begin{cases}x e^{-x(y+1)} & \text { if } x \geqslant 0 \text { and } y \geqslant 0 \\ 0 & \text { otherwise. }\end{cases}$

(i) Find the conditional probability density function of $Y$ given $X$.

(ii) Find the expectation of $Y$ given $X$.
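A quick sympy verification of the standard computations (this is not the written solution): the marginal of $X$ is $e^{-x}$, so $Y \mid X = x$ is $\operatorname{Exp}(x)$, with conditional expectation $1/x$.

```python
# Integrate out y to get the marginal of X, form the conditional density of
# Y given X = x, and compute the conditional expectation.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
joint = x * sp.exp(-x * (y + 1))

marginal_x = sp.integrate(joint, (y, 0, sp.oo))
assert sp.simplify(marginal_x - sp.exp(-x)) == 0

cond = sp.simplify(joint / marginal_x)        # density of Y given X = x
assert sp.simplify(cond - x * sp.exp(-x * y)) == 0

cond_mean = sp.integrate(y * cond, (y, 0, sp.oo))
assert sp.simplify(cond_mean - 1 / x) == 0
```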

• # Paper 2, Section I, F

Let $X$ be a non-negative integer-valued random variable such that $0<\mathbb{E}\left(X^{2}\right)<\infty$.

Prove that

$\frac{\mathbb{E}(X)^{2}}{\mathbb{E}\left(X^{2}\right)} \leqslant \mathbb{P}(X>0) \leqslant \mathbb{E}(X)$

[You may use any standard inequality.]
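A concrete illustration with a toy distribution (the values are chosen here for clarity): with $\mathbb{P}(X=0)=\mathbb{P}(X=2)=\tfrac{1}{2}$, the lower bound is attained and both inequalities are visible.

```python
# Check E(X)^2 / E(X^2) <= P(X > 0) <= E(X) for a small explicit distribution.
from fractions import Fraction

dist = {0: Fraction(1, 2), 2: Fraction(1, 2)}
EX = sum(k * p for k, p in dist.items())            # E(X)   = 1
EX2 = sum(k * k * p for k, p in dist.items())       # E(X^2) = 2
P_pos = sum(p for k, p in dist.items() if k > 0)    # P(X>0) = 1/2

assert EX**2 / EX2 <= P_pos <= EX
print(EX**2 / EX2, P_pos, EX)   # 1/2 1/2 1
```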

• # Paper 2, Section II, 10F

(a) For any random variable $X$ and $\lambda>0$ and $t>0$, show that

$\mathbb{P}(X>t) \leqslant \mathbb{E}\left(e^{\lambda X}\right) e^{-\lambda t}$

For a standard normal random variable $X$, compute $\mathbb{E}\left(e^{\lambda X}\right)$ and deduce that

$\mathbb{P}(X>t) \leqslant e^{-\frac{1}{2} t^{2}}$

(b) Let $\mu, \lambda>0, \mu \neq \lambda$. For independent random variables $X$ and $Y$ with distributions $\operatorname{Exp}(\lambda)$ and $\operatorname{Exp}(\mu)$, respectively, compute the probability density functions of $X+Y$ and $\min \{X, Y\}$.
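A sympy sketch of part (b) (writing $\mu = \lambda + k$ with $k > 0$ to keep the two rates distinct, and taking $\operatorname{Exp}(\text{rate})$ to have density $\text{rate}\,e^{-\text{rate}\,t}$ on $t \geqslant 0$): convolution gives the density of $X+Y$, and differentiating the product of survival functions shows $\min\{X, Y\} \sim \operatorname{Exp}(\lambda+\mu)$.

```python
# Density of X + Y by convolution, and of min{X, Y} via survival functions.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
lam, k = sp.symbols('lambda k', positive=True)
mu = lam + k   # guarantees mu != lam

f_sum = sp.integrate(lam * sp.exp(-lam * s) * mu * sp.exp(-mu * (t - s)),
                     (s, 0, t))
expected_sum = lam * mu / (mu - lam) * (sp.exp(-lam * t) - sp.exp(-mu * t))
assert sp.simplify(f_sum - expected_sum) == 0

# P(min > t) = e^{-lam t} e^{-mu t}; minus its derivative is the density.
f_min = -sp.diff(sp.exp(-lam * t) * sp.exp(-mu * t), t)
assert sp.simplify(f_min - (lam + mu) * sp.exp(-(lam + mu) * t)) == 0
```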

• # Paper 2, Section II, 12F

(a) Let $k \in\{1,2, \ldots\}$. For $j \in\{0, \ldots, k+1\}$, let $D_{j}$ be the first time at which a simple symmetric random walk on $\mathbb{Z}$ with initial position $j$ at time 0 hits 0 or $k+1$. Show $\mathbb{E}\left(D_{j}\right)=j(k+1-j)$. [If you use a recursion relation, you do not need to prove that its solution is unique.]
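Part (a) can be checked exactly for a small $k$ (here $k = 5$, chosen arbitrarily): the standard recursion $\mathbb{E}(D_{j}) = 1 + \tfrac{1}{2}\bigl(\mathbb{E}(D_{j-1}) + \mathbb{E}(D_{j+1})\bigr)$ with boundary values $\mathbb{E}(D_{0}) = \mathbb{E}(D_{k+1}) = 0$ is a tridiagonal linear system, solvable exactly over the rationals.

```python
# Solve the hitting-time recursion by Gauss-Jordan elimination over Fractions
# and compare with the closed form E(D_j) = j(k+1-j).
from fractions import Fraction

k = 5
n = k                          # unknowns E(D_1), ..., E(D_k)
A = [[Fraction(0)] * n for _ in range(n)]
rhs = [Fraction(1)] * n        # E_j - (E_{j-1} + E_{j+1})/2 = 1
for i in range(n):
    A[i][i] = Fraction(1)
    if i > 0:
        A[i][i - 1] = Fraction(-1, 2)
    if i < n - 1:
        A[i][i + 1] = Fraction(-1, 2)

# Gauss-Jordan without pivoting (fine: the system is diagonally dominant).
for col in range(n):
    inv = 1 / A[col][col]
    A[col] = [a * inv for a in A[col]]
    rhs[col] *= inv
    for row in range(n):
        if row != col and A[row][col] != 0:
            factor = A[row][col]
            A[row] = [a - factor * b for a, b in zip(A[row], A[col])]
            rhs[row] -= factor * rhs[col]

assert rhs == [j * (k + 1 - j) for j in range(1, k + 1)]
```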

(b) Let $\left(S_{n}\right)$ be a simple symmetric random walk on $\mathbb{Z}$ starting at 0 at time $n=0$. For $k \in\{1,2, \ldots\}$, let $T_{k}$ be the first time at which $\left(S_{n}\right)$ has visited $k$ distinct vertices. In particular, $T_{1}=0$. Show $\mathbb{E}\left(T_{k+1}-T_{k}\right)=k$ for $k \geqslant 1$. [You may use without proof that, conditional on $S_{T_{k}}=i$, the random variables $\left(S_{T_{k}+n}\right)_{n \geqslant 0}$ have the distribution of a simple symmetric random walk starting at $i$.]

(c) For $n \geqslant 3$, let $\mathbb{Z}_{n}$ be the circle graph consisting of vertices $0, \ldots, n-1$ and edges between $k$ and $k+1$, where $n$ is identified with $0$. Let $\left(Y_{i}\right)$ be a simple random walk on $\mathbb{Z}_{n}$ starting at time 0 from 0. Thus $Y_{0}=0$ and, conditional on $Y_{i}$, the random variable $Y_{i+1}$ is $Y_{i} \pm 1$ with equal probability (identifying $k+n$ with $k$).

The cover time $T$ of the simple random walk on $\mathbb{Z}_{n}$ is the first time at which the random walk has visited all vertices. Show that $\mathbb{E}(T)=n(n-1) / 2$.

• # Paper 2, Section II, F

Let $\beta>0$. The Curie-Weiss Model of ferromagnetism is the probability distribution defined as follows. For $n \in \mathbb{N}$, define random variables $S_{1}, \ldots, S_{n}$ with values in $\{\pm 1\}$ such that the probabilities are given by

$\mathbb{P}\left(S_{1}=s_{1}, \ldots, S_{n}=s_{n}\right)=\frac{1}{Z_{n, \beta}} \exp \left(\frac{\beta}{2 n} \sum_{i=1}^{n} \sum_{j=1}^{n} s_{i} s_{j}\right)$

where $Z_{n, \beta}$ is the normalisation constant

$Z_{n, \beta}=\sum_{s_{1} \in\{\pm 1\}} \cdots \sum_{s_{n} \in\{\pm 1\}} \exp \left(\frac{\beta}{2 n} \sum_{i=1}^{n} \sum_{j=1}^{n} s_{i} s_{j}\right)$

(a) Show that $\mathbb{E}\left(S_{i}\right)=0$ for any $i$.

(b) Show that $\mathbb{P}\left(S_{2}=+1 \mid S_{1}=+1\right) \geqslant \mathbb{P}\left(S_{2}=+1\right)$. [You may use $\mathbb{E}\left(S_{i} S_{j}\right) \geqslant 0$ for all $i, j$ without proof.]

(c) Let $M=\frac{1}{n} \sum_{i=1}^{n} S_{i}$. Show that $M$ takes values in $E_{n}=\left\{-1+\frac{2 k}{n}: k=0, \ldots, n\right\}$, and that for each $m \in E_{n}$ the number of possible values of $\left(S_{1}, \ldots, S_{n}\right)$ such that $M=m$ is

$\frac{n !}{\left(\frac{1+m}{2} n\right) !\left(\frac{1-m}{2} n\right) !}$

Find $\mathbb{P}(M=m)$ for any $m \in E_{n}$.
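A brute-force check for a small instance ($n = 4$ and $\beta = 1$ are chosen here): enumerating all $2^{n}$ spin configurations confirms $\mathbb{E}(S_{i}) = 0$ and the counting formula for each value of $M$, using $\sum_{i,j} s_{i}s_{j} = \bigl(\sum_{i} s_{i}\bigr)^{2}$.

```python
# Enumerate all spin configurations of the Curie-Weiss model for n = 4.
import itertools, math
from collections import Counter

n, beta = 4, 1.0

def weight(s):
    tot = sum(s)
    return math.exp(beta / (2 * n) * tot * tot)  # sum_{i,j} s_i s_j = (sum s_i)^2

configs = list(itertools.product([1, -1], repeat=n))
Z = sum(weight(s) for s in configs)

# (a): E(S_1) = 0, by the spin-flip symmetry s -> -s.
ES1 = sum(s[0] * weight(s) for s in configs) / Z
assert abs(ES1) < 1e-12

# (c): the number of configurations with magnetisation m is the binomial
# coefficient n! / (((1+m)/2 n)! ((1-m)/2 n)!).
counts = Counter(sum(s) / n for s in configs)
for m, c in counts.items():
    k_plus = round((1 + m) / 2 * n)   # number of +1 spins
    assert c == math.factorial(n) // (math.factorial(k_plus)
                                      * math.factorial(n - k_plus))
```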

• # Paper 2, Section II, F

For a positive integer $N, p \in[0,1]$, and $k \in\{0,1, \ldots, N\}$, let

$p_{k}(N, p)=\left(\begin{array}{c} N \\ k \end{array}\right) p^{k}(1-p)^{N-k}$

(a) For fixed $N$ and $p$, show that $p_{k}(N, p)$ is a probability mass function on $\{0,1, \ldots, N\}$ and that the corresponding probability distribution has mean $N p$ and variance $N p(1-p)$.

(b) Let $\lambda>0$. Show that, for any $k \in\{0,1,2, \ldots\}$,

$\lim _{N \rightarrow \infty} p_{k}(N, \lambda / N)=\frac{e^{-\lambda} \lambda^{k}}{k !} \tag{*}$

Show that the right-hand side of $(*)$ is a probability mass function on $\{0,1,2, \ldots\}$.

(c) Let $p \in(0,1)$ and let $a, b \in \mathbb{R}$ with $a<b$. For all $N$, find integers $k_{a}(N)$ and $k_{b}(N)$ such that

$\sum_{k=k_{a}(N)}^{k_{b}(N)} p_{k}(N, p) \rightarrow \frac{1}{\sqrt{2 \pi}} \int_{a}^{b} e^{-\frac{1}{2} x^{2}} d x \quad \text { as } N \rightarrow \infty$

[You may use the Central Limit Theorem.]
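A numerical illustration of part (c) (the values of $N$, $p$, $a$, $b$ are chosen here): with $k_{a}(N)=\lceil Np + a\sqrt{Np(1-p)}\rceil$ and $k_{b}(N)=\lfloor Np + b\sqrt{Np(1-p)}\rfloor$, the binomial sum is already close to the normal integral at $N = 10000$. The pmf is evaluated in log space to avoid overflow for large $N$.

```python
# Sum the binomial pmf over [k_a(N), k_b(N)] and compare with the normal
# integral (1/sqrt(2 pi)) * int_a^b e^{-x^2/2} dx, written via erf.
import math

def binom_pmf(N, p, k):
    return math.exp(math.lgamma(N + 1) - math.lgamma(k + 1)
                    - math.lgamma(N - k + 1)
                    + k * math.log(p) + (N - k) * math.log(1 - p))

N, p, a, b = 10000, 0.5, -1.0, 1.0
sigma = math.sqrt(N * p * (1 - p))
ka = math.ceil(N * p + a * sigma)
kb = math.floor(N * p + b * sigma)

total = sum(binom_pmf(N, p, k) for k in range(ka, kb + 1))
target = 0.5 * (math.erf(b / math.sqrt(2)) - math.erf(a / math.sqrt(2)))
print(total, target)
assert abs(total - target) < 0.02
```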
