• # Paper 2, Section I, B

Show that for given $P(x, y), Q(x, y)$ there is a function $F(x, y)$ such that, for any function $y(x)$,

$P(x, y)+Q(x, y) \frac{d y}{d x}=\frac{d}{d x} F(x, y)$

if and only if

$\frac{\partial P}{\partial y}=\frac{\partial Q}{\partial x}$

Now solve the equation

$(2 y+3 x) \frac{d y}{d x}+4 x^{3}+3 y=0$
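A numeric sanity check (not a substitute for the worked solution): here $P=4x^3+3y$ and $Q=2y+3x$, so $\partial P/\partial y = 3 = \partial Q/\partial x$ and the potential $F(x,y)=x^4+3xy+y^2$ should be constant along any solution; the initial condition and step size below are arbitrary choices.

```python
# Check that F(x, y) = x^4 + 3xy + y^2 is conserved along a numerically
# integrated solution of (2y+3x) dy/dx + 4x^3 + 3y = 0, i.e. dy/dx = -P/Q.
def P(x, y): return 4 * x**3 + 3 * y
def Q(x, y): return 2 * y + 3 * x
def F(x, y): return x**4 + 3 * x * y + y**2

def rk4_step(x, y, h):
    # classical RK4 for dy/dx = -P/Q (valid away from Q = 0)
    f = lambda x, y: -P(x, y) / Q(x, y)
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

x, y, h = 0.0, 1.0, 1e-3        # arbitrary starting point with Q(0, 1) = 2 != 0
F0 = F(x, y)
for _ in range(1000):           # integrate from x = 0 to x = 1
    y = rk4_step(x, y, h)
    x += h
assert abs(F(x, y) - F0) < 1e-6  # F stays (numerically) constant
```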

• # Paper 2, Section I, B

Consider the following difference equation for real $u_{n}$ :

$u_{n+1}=a u_{n}\left(1-u_{n}^{2}\right)$

where $a$ is a real constant.

For $-\infty<a<\infty$ find the steady-state solutions, i.e. those with $u_{n+1}=u_{n}$ for all $n$, and determine their stability, making it clear how the number of solutions and the stability properties vary with $a$. [You need not consider in detail particular values of $a$ which separate intervals with different stability properties.]
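As an illustration (the parameter value is an arbitrary choice, not part of the question): solving $u = au(1-u^2)$ gives the non-zero steady states $u^* = \pm\sqrt{(a-1)/a}$ when that expression is non-negative, and iterating the map shows convergence to $u^*$ for a sample $a$ in a stable regime.

```python
# Iterate u_{n+1} = a u_n (1 - u_n^2) for a = 1.5 (illustrative value) and
# check convergence to the non-zero steady state u* = sqrt((a-1)/a).
a = 1.5
u_star = ((a - 1) / a) ** 0.5
u = 0.4                          # start away from the steady state
for _ in range(100):
    u = a * u * (1 - u * u)
assert abs(u - u_star) < 1e-10   # the iteration settles on u*
```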

• # Paper 2, Section II, B

The function $u(x, y)$ satisfies the partial differential equation

$a \frac{\partial^{2} u}{\partial x^{2}}+b \frac{\partial^{2} u}{\partial x \partial y}+c \frac{\partial^{2} u}{\partial y^{2}}=0$

where $a, b$ and $c$ are non-zero constants.

Defining the variables $\xi=\alpha x+y$ and $\eta=\beta x+y$, where $\alpha$ and $\beta$ are constants, and writing $v(\xi, \eta)=u(x, y)$, show that

$a \frac{\partial^{2} u}{\partial x^{2}}+b \frac{\partial^{2} u}{\partial x \partial y}+c \frac{\partial^{2} u}{\partial y^{2}}=A(\alpha, \beta) \frac{\partial^{2} v}{\partial \xi^{2}}+B(\alpha, \beta) \frac{\partial^{2} v}{\partial \xi \partial \eta}+C(\alpha, \beta) \frac{\partial^{2} v}{\partial \eta^{2}},$

where you should determine the functions $A(\alpha, \beta), B(\alpha, \beta)$ and $C(\alpha, \beta)$.

If the quadratic $a s^{2}+b s+c=0$ has distinct real roots then show that $\alpha$ and $\beta$ can be chosen such that $A(\alpha, \beta)=C(\alpha, \beta)=0$ and $B(\alpha, \beta) \neq 0$.

If the quadratic $a s^{2}+b s+c=0$ has a repeated root then show that $\alpha$ and $\beta$ can be chosen such that $A(\alpha, \beta)=B(\alpha, \beta)=0$ and $C(\alpha, \beta) \neq 0$.

Hence find the general solutions of the equations

$\frac{\partial^{2} u}{\partial x^{2}}+3 \frac{\partial^{2} u}{\partial x \partial y}+2 \frac{\partial^{2} u}{\partial y^{2}}=0$

and

$\frac{\partial^{2} u}{\partial x^{2}}+2 \frac{\partial^{2} u}{\partial x \partial y}+\frac{\partial^{2} u}{\partial y^{2}}=0$
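For the first equation the quadratic $s^2+3s+2=0$ has roots $s=-1,-2$, suggesting solutions of the form $u = f(y-x) + g(y-2x)$. A finite-difference spot check (with arbitrary sample choices of $f$, $g$ and evaluation point) confirms this:

```python
import math

# Sample solution candidate u = f(y - x) + g(y - 2x) with f = sin, g(t) = t^3;
# verify u_xx + 3 u_xy + 2 u_yy = 0 by central differences at one point.
def u(x, y):
    return math.sin(y - x) + (y - 2 * x) ** 3

h, x0, y0 = 1e-3, 0.3, 0.7       # arbitrary step and evaluation point
uxx = (u(x0 + h, y0) - 2 * u(x0, y0) + u(x0 - h, y0)) / h**2
uyy = (u(x0, y0 + h) - 2 * u(x0, y0) + u(x0, y0 - h)) / h**2
uxy = (u(x0 + h, y0 + h) - u(x0 + h, y0 - h)
       - u(x0 - h, y0 + h) + u(x0 - h, y0 - h)) / (4 * h**2)
assert abs(uxx + 3 * uxy + 2 * uyy) < 1e-4   # PDE satisfied to truncation error
```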

• # Paper 2, Section II, B

By choosing a suitable basis, solve the equation

$\left(\begin{array}{ll} 1 & 2 \\ 1 & 0 \end{array}\right)\left(\begin{array}{l} \dot{x} \\ \dot{y} \end{array}\right)+\left(\begin{array}{cc} -2 & 5 \\ 2 & -1 \end{array}\right)\left(\begin{array}{l} x \\ y \end{array}\right)=e^{-4 t}\left(\begin{array}{c} 3 b \\ 2 \end{array}\right)+e^{-t}\left(\begin{array}{c} -3 \\ c-1 \end{array}\right)$

subject to the initial conditions $x(0)=0, y(0)=0$.

Explain briefly what happens in the cases $b=2$ or $c=2$.
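A quick computational aside: the suitable basis comes from the eigenvectors of $M^{-1}A$, where $M$ multiplies the derivatives and $A$ the undifferentiated terms. Its eigenvalues turn out to be $1$ and $4$, so the homogeneous solutions decay like $e^{-t}$ and $e^{-4t}$, the same exponents as the forcing, which is why special values of $b$ and $c$ matter.

```python
# Compute the eigenvalues of M^{-1} A for the system above.
M = [[1.0, 2.0], [1.0, 0.0]]
A = [[-2.0, 5.0], [2.0, -1.0]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]          # det M = -2
Minv = [[ M[1][1] / det, -M[0][1] / det],
        [-M[1][0] / det,  M[0][0] / det]]
B = [[sum(Minv[i][k] * A[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]                              # B = M^{-1} A
tr = B[0][0] + B[1][1]
d = B[0][0] * B[1][1] - B[0][1] * B[1][0]
lam1 = (tr - (tr * tr - 4 * d) ** 0.5) / 2
lam2 = (tr + (tr * tr - 4 * d) ** 0.5) / 2
assert abs(lam1 - 1.0) < 1e-9 and abs(lam2 - 4.0) < 1e-9
```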

• # Paper 2, Section II, B

The temperature $T$ in an oven is controlled by a heater which provides heat at rate $Q(t)$. The temperature of a pizza in the oven is $U$. Room temperature is the constant value $T_{r}$.

$T$ and $U$ satisfy the coupled differential equations

\begin{aligned} \frac{d T}{d t} &=-a\left(T-T_{r}\right)+Q(t) \\ \frac{d U}{d t} &=-b(U-T) \end{aligned}

where $a$ and $b$ are positive constants. Briefly explain the various terms appearing in the above equations.

Heating may be provided by a short-lived pulse at $t=0$, with $Q(t)=Q_{1}(t)=\delta(t)$, or by constant heating over a finite period $0<t<\tau$, with $Q(t)=Q_{2}(t)=\tau^{-1}(H(t)-H(t-\tau))$, where $\delta(t)$ and $H(t)$ are respectively the Dirac delta function and the Heaviside step function. Again briefly, explain how the given formulae for $Q_{1}(t)$ and $Q_{2}(t)$ are consistent with their description and why the total heat supplied by the two heating protocols is the same.

For $t<0, T=U=T_{r}$. Find the solutions for $T(t)$ and $U(t)$ for $t>0$, for each of $Q(t)=Q_{1}(t)$ and $Q(t)=Q_{2}(t)$, denoted respectively by $T_{1}(t)$ and $U_{1}(t)$, and $T_{2}(t)$ and $U_{2}(t)$. Explain clearly any assumptions that you make about continuity of the solutions in time.

Show that the solutions $T_{2}(t)$ and $U_{2}(t)$ tend respectively to $T_{1}(t)$ and $U_{1}(t)$ in the limit as $\tau \rightarrow 0$ and explain why.
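The limit for $T$ can be sketched numerically. The closed forms below are assumed results of the elementary integrations the question asks for (with arbitrary illustrative values of $a$, $T_r$ and $t$): for $t>\tau$ the two solutions differ only by a prefactor that tends to 1.

```python
import math

# Assumed closed forms (for t > tau):
#   T1(t) = Tr + exp(-a t)                                      (delta pulse)
#   T2(t) = Tr + (1 - exp(-a tau)) / (a tau) * exp(-a (t - tau))
# The prefactor (1 - e^{-a tau}) / (a tau) -> 1 as tau -> 0, so T2 -> T1.
a, Tr, t = 2.0, 20.0, 1.0            # arbitrary sample values
T1 = Tr + math.exp(-a * t)
for tau in (1e-2, 1e-4, 1e-6):
    T2 = Tr + (1 - math.exp(-a * tau)) / (a * tau) * math.exp(-a * (t - tau))
    assert abs(T2 - T1) < 2 * a * tau   # discrepancy shrinks linearly in tau
```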

• # Paper 2, Section II, B

Consider the differential equation

$x^{2} \frac{d^{2} y}{d x^{2}}+x \frac{d y}{d x}-\left(x^{2}+\alpha^{2}\right) y=0$

What values of $x$ are ordinary points of the differential equation? What values of $x$ are singular points of the differential equation, and are they regular singular points or irregular singular points? Give clear definitions of these terms to support your answers.

For $\alpha$ not equal to an integer there are two linearly independent power series solutions about $x=0$. Give the forms of the two power series and the recurrence relations that specify the relation between successive coefficients. Give explicitly the first three terms in each power series.

For $\alpha$ equal to an integer explain carefully why the forms you have specified do not give two linearly independent power series solutions. Show that for such values of $\alpha$ there is (up to multiplication by a constant) one power series solution, and give the recurrence relation between coefficients. Give explicitly the first three terms.

If $y_{1}(x)$ is a solution of the above second-order differential equation then

$y_{2}(x)=y_{1}(x) \int_{c}^{x} \frac{1}{s\left[y_{1}(s)\right]^{2}} d s$

where $c$ is an arbitrarily chosen constant, is a second solution that is linearly independent of $y_{1}(x)$. For the case $\alpha=1$, taking $y_{1}(x)$ to be a power series, explain why the second solution $y_{2}(x)$ is not a power series.

[You may assume that any power series you use are convergent.]
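A numeric spot check of the series machinery for the sample case $\alpha = 1$ (an illustrative choice): trying $y=\sum_m c_m x^{1+2m}$ with $c_0=1$, substitution yields the recurrence $c_m = c_{m-1}/\bigl(4m(m+1)\bigr)$, and the truncated series should nearly annihilate the equation.

```python
# Truncated series solution for x^2 y'' + x y' - (x^2 + 1) y = 0 (alpha = 1).
N, x = 12, 0.1                   # truncation order and sample point (arbitrary)
c = [1.0]
for m in range(1, N):
    c.append(c[-1] / (4 * m * (m + 1)))   # c_m = c_{m-1} / (4 m (m+1))
y   = sum(c[m] * x ** (1 + 2 * m) for m in range(N))
yp  = sum(c[m] * (1 + 2 * m) * x ** (2 * m) for m in range(N))
ypp = sum(c[m] * (1 + 2 * m) * (2 * m) * x ** (2 * m - 1) for m in range(N))
residual = x * x * ypp + x * yp - (x * x + 1) * y
assert abs(residual) < 1e-12     # only the O(x^{2N+1}) remainder survives
```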


• # Paper 2, Section I, F

(a) State the Cauchy-Schwarz inequality and Markov's inequality. State and prove Jensen's inequality.

(b) For a discrete random variable $X$, show that $\operatorname{Var}(X)=0$ implies that $X$ is constant, i.e. there is $x \in \mathbb{R}$ such that $\mathbb{P}(X=x)=1$.

• # Paper 2, Section I, F

Let $X$ and $Y$ be independent Poisson random variables with parameters $\lambda$ and $\mu$ respectively.

(i) Show that $X+Y$ is Poisson with parameter $\lambda+\mu$.

(ii) Show that the conditional distribution of $X$ given $X+Y=n$ is binomial, and find its parameters.
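The identity in (ii) can be checked exactly for sample parameter values (the values of $\lambda$, $\mu$, $n$ below are arbitrary): the conditional distribution should be $\operatorname{Binomial}\bigl(n, \lambda/(\lambda+\mu)\bigr)$.

```python
import math

# Compare P(X = k | X + Y = n) computed from Poisson pmfs with the
# Binomial(n, lam/(lam+mu)) pmf.
lam, mu, n = 2.0, 3.0, 7         # arbitrary sample values
def pois(k, r):
    return math.exp(-r) * r ** k / math.factorial(k)
p = lam / (lam + mu)
for k in range(n + 1):
    cond = pois(k, lam) * pois(n - k, mu) / pois(n, lam + mu)
    binom = math.comb(n, k) * p ** k * (1 - p) ** (n - k)
    assert abs(cond - binom) < 1e-12
```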

• # Paper 2, Section II, 10F

(a) Let $X$ and $Y$ be independent random variables taking values $\pm 1$, each with probability $\frac{1}{2}$, and let $Z=X Y$. Show that $X, Y$ and $Z$ are pairwise independent. Are they independent?
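Part (a) can be verified by direct enumeration of the four equally likely outcomes. The sketch below checks independence of the pair $(X,Z)$ (the other pairs follow by symmetry) and exhibits the failure of full independence, since $Z$ is determined by $X$ and $Y$.

```python
from itertools import product

# The four equally likely outcomes (x, y, z) with z = x * y.
outcomes = [(x, y, x * y) for x, y in product((-1, 1), repeat=2)]
def prob(pred):
    return sum(1 for o in outcomes if pred(*o)) / len(outcomes)

for a in (-1, 1):
    for b in (-1, 1):
        joint = prob(lambda x, y, z: x == a and z == b)
        assert joint == prob(lambda x, y, z: x == a) * prob(lambda x, y, z: z == b)
# Under full independence P(X=1, Y=1, Z=-1) would be 1/8, but it is 0:
assert prob(lambda x, y, z: x == 1 and y == 1 and z == -1) == 0.0
```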

(b) Let $X$ and $Y$ be discrete random variables with mean 0, variance 1, and covariance $\rho$. Show that $\mathbb{E} \max \left\{X^{2}, Y^{2}\right\} \leqslant 1+\sqrt{1-\rho^{2}}$.

(c) Let $X_{1}, X_{2}, X_{3}$ be discrete random variables. Writing $a_{i j}=\mathbb{P}\left(X_{i}>X_{j}\right)$, show that $\min \left\{a_{12}, a_{23}, a_{31}\right\} \leqslant \frac{2}{3}$.

• # Paper 2, Section II, F

For a symmetric simple random walk $\left(X_{n}\right)$ on $\mathbb{Z}$ starting at 0 , let $M_{n}=\max _{i \leqslant n} X_{i}$.

(i) For $m \geqslant 0$ and $x \in \mathbb{Z}$, show that

$\mathbb{P}\left(M_{n} \geqslant m, X_{n}=x\right)= \begin{cases}\mathbb{P}\left(X_{n}=x\right) & \text { if } x \geqslant m \\ \mathbb{P}\left(X_{n}=2 m-x\right) & \text { if } x<m\end{cases}$

(ii) For $m \geqslant 0$, show that $\mathbb{P}\left(M_{n} \geqslant m\right)=\mathbb{P}\left(X_{n}=m\right)+2 \sum_{x>m} \mathbb{P}\left(X_{n}=x\right)$ and that

$\mathbb{P}\left(M_{n}=m\right)=\mathbb{P}\left(X_{n}=m\right)+\mathbb{P}\left(X_{n}=m+1\right)$

(iii) Prove that $\mathbb{E}\left(M_{n}^{2}\right)<\mathbb{E}\left(X_{n}^{2}\right)$.
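The reflection identity in (i) can be brute-force checked for a small case ($n=6$, $m=2$ are arbitrary illustrative choices) by enumerating all $2^n$ walks:

```python
from itertools import product

n, m = 6, 2
walks = list(product((-1, 1), repeat=n))
def p_end(x):                    # P(X_n = x)
    return sum(1 for w in walks if sum(w) == x) / len(walks)
def p_max_end(m, x):             # P(M_n >= m, X_n = x), with X_0 = 0 included
    count = 0
    for w in walks:
        s, mx = 0, 0
        for step in w:
            s += step
            mx = max(mx, s)
        if mx >= m and s == x:
            count += 1
    return count / len(walks)

for x in range(-n, n + 1):
    rhs = p_end(x) if x >= m else p_end(2 * m - x)
    assert p_max_end(m, x) == rhs
```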

• # Paper 2, Section II, F

(a) Consider a Galton-Watson process $\left(X_{n}\right)$. Prove that the extinction probability $q$ is the smallest non-negative solution of the equation $q=F(q)$ where $F(t)=\mathbb{E}\left(t^{X_{1}}\right)$. [You should prove any properties of Galton-Watson processes that you use.]

In the case of a Galton-Watson process with

$\mathbb{P}\left(X_{1}=1\right)=1 / 4, \quad \mathbb{P}\left(X_{1}=3\right)=3 / 4$

find the mean population size and compute the extinction probability.
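An illustrative check of the fixed-point structure (which gives away the shape of the answer, so treat it as a sketch): here $F(t)=t/4+3t^3/4$, and $t=F(t)$ reduces to $3t(t-1)(t+1)=0$.

```python
# Generating function for the given offspring distribution.
F = lambda t: t / 4 + 3 * t ** 3 / 4
assert 1 * 0.25 + 3 * 0.75 == 2.5          # mean offspring number E(X_1) = 5/2
assert F(0.0) == 0.0 and F(1.0) == 1.0     # 0 and 1 are both fixed points
assert abs(F(0.5) - 0.5) > 0.05            # e.g. t = 0.5 is not a fixed point
```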

(b) For each $n \in \mathbb{N}$, let $Y_{n}$ be a random variable with distribution $\operatorname{Poisson}(n)$. Show that

$\frac{Y_{n}-n}{\sqrt{n}} \rightarrow Z$

in distribution, where $Z$ is a standard normal random variable.

Deduce that

$\lim _{n \rightarrow \infty} e^{-n} \sum_{k=0}^{n} \frac{n^{k}}{k !}=\frac{1}{2}$
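The quantity on the left is exactly $\mathbb{P}(Y_n \leqslant n)$, which makes the limit easy to illustrate numerically (terms are computed in log space to avoid underflow of $e^{-n}$ for large $n$; the sample values of $n$ are arbitrary):

```python
import math

# e^{-n} * sum_{k=0}^{n} n^k / k! = P(Y_n <= n) for Y_n ~ Poisson(n).
def poisson_cdf_at_mean(n):
    return sum(math.exp(k * math.log(n) - n - math.lgamma(k + 1))
               for k in range(n + 1))

assert abs(poisson_cdf_at_mean(20) - 0.5) < 0.1
assert abs(poisson_cdf_at_mean(2000) - 0.5) < 0.01   # approaching 1/2
```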

• # Paper 2, Section II, F

(a) Let $Y$ and $Z$ be independent discrete random variables taking values in sets $S_{1}$ and $S_{2}$ respectively, and let $F: S_{1} \times S_{2} \rightarrow \mathbb{R}$ be a function.

Let $E(z)=\mathbb{E} F(Y, z)$. Show that

$\mathbb{E} E(Z)=\mathbb{E} F(Y, Z) .$

Let $V(z)=\mathbb{E}\left(F(Y, z)^{2}\right)-(\mathbb{E} F(Y, z))^{2}$. Show that

$\operatorname{Var} F(Y, Z)=\mathbb{E} V(Z)+\operatorname{Var} E(Z)$

(b) Let $X_{1}, \ldots, X_{n}$ be independent Bernoulli $(p)$ random variables. For any function $F:\{0,1\} \rightarrow \mathbb{R}$, show that

$\operatorname{Var} F\left(X_{1}\right)=p(1-p)(F(1)-F(0))^{2}$
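This one-variable identity can be checked exactly for sample values (the choices of $p$, $F(0)$, $F(1)$ below are arbitrary):

```python
# For X ~ Bernoulli(p), verify Var F(X) = p(1-p)(F(1) - F(0))^2 directly.
p, F0, F1 = 0.3, 2.0, 5.0                       # arbitrary sample values
mean = (1 - p) * F0 + p * F1
var = (1 - p) * (F0 - mean) ** 2 + p * (F1 - mean) ** 2
assert abs(var - p * (1 - p) * (F1 - F0) ** 2) < 1e-12
```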

Let $\{0,1\}^{n}$ denote the set of all $0-1$ sequences of length $n$. By induction, or otherwise, show that for any function $F:\{0,1\}^{n} \rightarrow \mathbb{R}$,

$\operatorname{Var} F(X) \leqslant p(1-p) \sum_{i=1}^{n} \mathbb{E}\left(\left(F(X)-F\left(X^{i}\right)\right)^{2}\right)$

where $X=\left(X_{1}, \ldots, X_{n}\right)$ and $X^{i}=\left(X_{1}, \ldots, X_{i-1}, 1-X_{i}, X_{i+1}, \ldots, X_{n}\right)$.
