• # Paper 2, Section I, 2C

Consider the first order system

$\frac{d \boldsymbol{v}}{d t}-B \boldsymbol{v}=e^{\lambda t} \boldsymbol{x} \tag{1}$

to be solved for $\boldsymbol{v}(t)=\left(v_{1}(t), v_{2}(t), \ldots, v_{n}(t)\right) \in \mathbb{R}^{n}$, where the $n \times n$ matrix $B, \lambda \in \mathbb{R}$ and $\boldsymbol{x} \in \mathbb{R}^{n}$ are all independent of time. Show that if $\lambda$ is not an eigenvalue of $B$ then there is a solution of the form $\boldsymbol{v}(t)=e^{\lambda t} \boldsymbol{u}$, with $\boldsymbol{u}$ constant.

For $n=2$, given

$B=\left(\begin{array}{ll} 0 & 3 \\ 1 & 0 \end{array}\right), \quad \lambda=2, \quad \text { and } \quad \boldsymbol{x}=\left(\begin{array}{l} 0 \\ 1 \end{array}\right)$

find the general solution to (1).
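Not part of the question, but the result can be checked numerically: substituting $\boldsymbol{v}=e^{\lambda t}\boldsymbol{u}$ gives $(\lambda I-B)\boldsymbol{u}=\boldsymbol{x}$, which is solvable precisely because $\lambda=2$ is not an eigenvalue of $B$ (the eigenvalues are $\pm\sqrt{3}$). A NumPy sketch:

```python
import numpy as np

# Substituting v(t) = exp(lam*t) * u into dv/dt - B v = exp(lam*t) x
# gives (lam*I - B) u = x, solvable whenever lam is not an eigenvalue of B.
B = np.array([[0.0, 3.0],
              [1.0, 0.0]])
lam = 2.0
x = np.array([0.0, 1.0])

assert lam not in np.linalg.eigvals(B)  # eigenvalues are +/- sqrt(3)
u = np.linalg.solve(lam * np.eye(2) - B, x)
print(u)  # the constant vector u
```

The general solution then adds the homogeneous part, built from the eigenvectors of $B$.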

• # Paper 2, Section I, C

The function $y(x)$ satisfies the inhomogeneous second-order linear differential equation

$y^{\prime \prime}-2 y^{\prime}-3 y=-16 x e^{-x}$

Find the solution that satisfies the conditions that $y(0)=1$ and $y(x)$ is bounded as $x \rightarrow \infty$.
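A quick SymPy verification (a sketch, not the expected working): the characteristic roots are $3$ and $-1$, boundedness as $x\to\infty$ removes the $e^{3x}$ mode, and the candidate below is assumed from the resonant ansatz $(ax^{2}+bx)e^{-x}$ together with $y(0)=1$:

```python
import sympy as sp

x = sp.symbols('x')
# Candidate bounded solution (an assumption to be verified, not a derivation):
y = (2*x**2 + x + 1)*sp.exp(-x)
# Residual of the ODE; it vanishes identically if the candidate is correct.
residual = sp.expand(y.diff(x, 2) - 2*y.diff(x) - 3*y + 16*x*sp.exp(-x))
print(residual, y.subs(x, 0))  # 0 and 1
```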

• # Paper 2, Section II, 5C

Consider the problem of solving

$\frac{d^{2} y}{d t^{2}}=t \tag{1}$

subject to the initial conditions $y(0)=\frac{d y}{d t}(0)=0$ using a discrete approach where $y$ is computed at discrete times, $y_{n}=y\left(t_{n}\right)$ where $t_{n}=n h$ $(n=-1,0,1, \ldots, N)$ and $0<h=1 / N \ll 1$.

(a) By using Taylor expansions around $t_{n}$, derive the centred-difference formula

$\frac{y_{n+1}-2 y_{n}+y_{n-1}}{h^{2}}=\left.\frac{d^{2} y}{d t^{2}}\right|_{t=t_{n}}+O\left(h^{\alpha}\right)$

where the value of $\alpha$ should be found.

(b) Find the general solution of $y_{n+1}-2 y_{n}+y_{n-1}=0$ and show that this is the discrete version of the corresponding general solution to $\frac{d^{2} y}{d t^{2}}=0$.

(c) The fully discretized version of the differential equation (1) is

$\frac{y_{n+1}-2 y_{n}+y_{n-1}}{h^{2}}=n h \quad \text { for } \quad n=0, \ldots, N-1 \tag{2}$

By finding a particular solution first, write down the general solution to the difference equation (2). For the solution which satisfies the discretized initial conditions $y_{0}=0$ and $y_{-1}=y_{1}$, find the error in $y_{N}$ in terms of $h$ only.
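As an illustration (assuming $h=1/N$ so that $t_{N}=1$, which is what makes the error expressible in terms of $h$ only), the scheme can be iterated directly:

```python
# Iterate the difference scheme y_{n+1} - 2 y_n + y_{n-1} = n h^3 directly,
# assuming h = 1/N so that t_N = 1 (an assumption of this sketch).
# Discretized initial conditions: y_0 = 0 and y_{-1} = y_1; the n = 0 step
# gives y_1 - 2*y_0 + y_{-1} = 0, hence y_1 = 0.
N = 100
h = 1.0 / N
y_prev, y_cur = 0.0, 0.0   # y_0 and y_1
for n in range(1, N):      # step n computes y_{n+1}, ending with y_N
    y_prev, y_cur = y_cur, 2*y_cur - y_prev + n*h**3
error = y_cur - 1.0/6.0    # exact solution is y(t) = t^3/6, so y(1) = 1/6
print(error)
```

The printed error is consistent with an $O(h^{2})$ truncation error for the centred-difference scheme.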

• # Paper 2, Section II, 6C

Find all power series solutions of the form $y=\sum_{n=0}^{\infty} a_{n} x^{n}$ to the equation

$\left(1-x^{2}\right) y^{\prime \prime}-x y^{\prime}+\lambda^{2} y=0$

for $\lambda$ a real constant. [It is sufficient to give a recurrence relationship between coefficients.]

Impose the condition $y^{\prime}(0)=0$ and determine those values of $\lambda$ for which your power series gives polynomial solutions (i.e., $a_{n}=0$ for $n$ sufficiently large). Give the values of $\lambda$ for which the corresponding polynomials have degree less than 6, and compute these polynomials. Hence, or otherwise, find a polynomial solution of

$\left(1-x^{2}\right) y^{\prime \prime}-x y^{\prime}+y=8 x^{4}-3$

satisfying $y^{\prime}(0)=0$.
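A quick check of one of the polynomial cases (a SymPy sketch, not the required working): substituting the series, one finds the recurrence $a_{n+2}=a_{n}\,(n^{2}-\lambda^{2})/((n+2)(n+1))$, so with $a_{1}=0$ the choice $\lambda=4$ truncates the even series at degree 4. The degree-4 candidate that results is verified below:

```python
import sympy as sp

x = sp.symbols('x')
# Hypothetical degree-4 polynomial obtained from the recurrence with
# lambda = 4, a_1 = 0, a_0 = 1; check it solves the homogeneous equation.
y = 8*x**4 - 8*x**2 + 1
lhs = sp.expand((1 - x**2)*y.diff(x, 2) - x*y.diff(x) + 16*y)
print(lhs)  # 0
```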

• # Paper 2, Section II, C

Consider the nonlinear system

$\begin{aligned} &\dot{x}=y-2 y^{3} \\ &\dot{y}=-x \end{aligned}$

(a) Show that $H=H(x, y)=x^{2}+y^{2}-y^{4}$ is a constant of the motion.

(b) Find all the critical points of the system and analyse their stability. Sketch the phase portrait including the special contours with value $H(x, y)=\frac{1}{4}$.

(c) Find an explicit expression for $y=y(t)$ in the solution which satisfies $(x, y)=\left(\frac{1}{2}, 0\right)$ at $t=0$. At what time does it reach the point $(x, y)=\left(\frac{1}{4},-\frac{1}{2}\right) ?$
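Part (a) amounts to showing that $\dot H = H_{x}\dot x + H_{y}\dot y$ vanishes along trajectories; a SymPy sketch confirms the cancellation:

```python
import sympy as sp

x, y = sp.symbols('x y')
H = x**2 + y**2 - y**4
xdot = y - 2*y**3
ydot = -x
# Along trajectories, dH/dt = H_x * xdot + H_y * ydot; it should vanish.
dHdt = sp.expand(sp.diff(H, x)*xdot + sp.diff(H, y)*ydot)
print(dHdt)  # 0
```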

• # Paper 2, Section II, C

Two cups of tea at temperatures $T_{1}(t)$ and $T_{2}(t)$ cool in a room at ambient constant temperature $T_{\infty}$. Initially $T_{1}(0)=T_{2}(0)=T_{0}>T_{\infty}$.

Cup 1 has cool milk added instantaneously at $t=1$ and then hot water added at a constant rate after $t=2$ which is modelled as follows

$\frac{d T_{1}}{d t}=-a\left(T_{1}-T_{\infty}\right)-\delta(t-1)+H(t-2)$

whereas cup 2 is left undisturbed and evolves as follows

$\frac{d T_{2}}{d t}=-a\left(T_{2}-T_{\infty}\right)$

where $\delta(t)$ and $H(t)$ are the Dirac delta and Heaviside functions respectively, and $a$ is a positive constant.

(a) Derive expressions for $T_{1}(t)$ when $0<t<1$ and for $T_{2}(t)$ when $t>0$.

(b) Show for $1<t<2$ that

$T_{1}(t)=T_{\infty}+\left(T_{0}-T_{\infty}-e^{a}\right) e^{-a t}$

(c) Derive an expression for $T_{1}(t)$ for $t>2$.

(d) At what time $t^{*}$ is $T_{1}=T_{2}$ ?

(e) Find how $t^{*}$ behaves for $a \rightarrow 0$ and explain your result.
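The formula displayed in part (b) can be sanity-checked symbolically (an illustration, not the required derivation): between the delta at $t=1$ and the Heaviside term switching on at $t=2$, $T_{1}$ obeys the homogeneous cooling law, and the delta produces a unit drop at $t=1$:

```python
import sympy as sp

t, a, T0, Tinf = sp.symbols('t a T0 Tinf', positive=True)
# On 0 < t < 1 both cups obey dT/dt = -a (T - Tinf) with T(0) = T0:
T_undisturbed = Tinf + (T0 - Tinf)*sp.exp(-a*t)
# Claimed form of T1 on 1 < t < 2 (from the statement of part (b)):
T1 = Tinf + (T0 - Tinf - sp.exp(a))*sp.exp(-a*t)
# It satisfies the homogeneous cooling law ...
res_ode = sp.simplify(T1.diff(t) + a*(T1 - Tinf))
# ... and the unit drop at t = 1 produced by the -delta(t-1) term:
res_jump = sp.simplify(T1.subs(t, 1) - (T_undisturbed.subs(t, 1) - 1))
print(res_ode, res_jump)  # both 0
```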


• # Paper 2, Section I, 3F

(a) Prove that $\log (n !) \sim n \log n$ as $n \rightarrow \infty$.

(b) State Stirling's approximation for $n$ !.

(c) A school party of $n$ boys and $n$ girls travel on a red bus and a green bus. Each bus can hold $n$ children. The children are distributed at random between the buses.

Let $A_{n}$ be the event that the boys all travel on the red bus and the girls all travel on the green bus. Show that

$\mathbb{P}\left(A_{n}\right) \sim \frac{\sqrt{\pi n}}{4^{n}} \text { as } n \rightarrow \infty$
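Since the $\binom{2n}{n}$ ways of choosing the red-bus passengers are equally likely and $A_{n}$ corresponds to exactly one of them, $\mathbb{P}(A_{n})=1/\binom{2n}{n}$, and the claimed asymptotic can be checked numerically (a sketch using exact rational arithmetic, since $4^{n}$ overflows a float for large $n$):

```python
import math
from fractions import Fraction

# Ratio of C(2n, n) to 4**n / sqrt(pi*n); it tends to 1, which is
# equivalent to P(A_n) = 1 / C(2n, n) ~ sqrt(pi*n) / 4**n.
ratios = []
for n in (10, 100, 1000):
    ratio = float(Fraction(math.comb(2*n, n), 4**n)) * math.sqrt(math.pi*n)
    ratios.append(ratio)
print(ratios)  # approaching 1
```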

• # Paper 2, Section I, F

Let $X$ and $Y$ be independent exponential random variables each with parameter 1. Write down the joint density function of $X$ and $Y$.

Let $U=6 X+8 Y$ and $V=2 X+3 Y$. Find the joint density function of $U$ and $V$.

Are $U$ and $V$ independent? Briefly justify your answer.
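A Monte Carlo sanity check (an illustration, not a proof): since $\operatorname{Cov}(U,V)=12\operatorname{Var}X+24\operatorname{Var}Y=36\neq 0$, $U$ and $V$ cannot be independent:

```python
import numpy as np

# With X, Y ~ Exp(1) independent (Var = 1 each, Cov(X, Y) = 0),
# Cov(6X+8Y, 2X+3Y) = 12*Var(X) + 24*Var(Y) = 36, so U, V are dependent.
rng = np.random.default_rng(0)
X = rng.exponential(1.0, size=500_000)
Y = rng.exponential(1.0, size=500_000)
U, V = 6*X + 8*Y, 2*X + 3*Y
cov = np.cov(U, V)[0, 1]
print(cov)  # close to 36
```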

• # Paper 2, Section II, F

Let $A_{1}, A_{2}, \ldots, A_{n}$ be events in some probability space. Let $X$ be the number of $A_{i}$ that occur (so $X$ is a random variable). Show that

$\mathbb{E}(X)=\sum_{i=1}^{n} \mathbb{P}\left(A_{i}\right)$

and

$\operatorname{Var}(X)=\sum_{i=1}^{n} \sum_{j=1}^{n}\left(\mathbb{P}\left(A_{i} \cap A_{j}\right)-\mathbb{P}\left(A_{i}\right) \mathbb{P}\left(A_{j}\right)\right)$

[Hint: Write $X=\sum_{i=1}^{n} X_{i}$ where $X_{i}=\left\{\begin{array}{ll}1 & \text { if } A_{i} \text { occurs } \\ 0 & \text { if not }\end{array}\right.$.]

A collection of $n$ lightbulbs are arranged in a circle. Each bulb is on independently with probability $p$. Let $X$ be the number of bulbs such that both that bulb and the next bulb clockwise are on. Find $\mathbb{E}(X)$ and $\operatorname{Var}(X)$.

Let $B$ be the event that there is at least one pair of adjacent bulbs that are both on.

Use Markov's inequality to show that if $p=n^{-0.6}$ then $\mathbb{P}(B) \rightarrow 0$ as $n \rightarrow \infty$.

Use Chebyshev's inequality to show that if $p=n^{-0.4}$ then $\mathbb{P}(B) \rightarrow 1$ as $n \rightarrow \infty$.
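The moment formulas for the circle of bulbs can be checked by simulation (illustration only; the values $n=12$, $p=0.3$ are hypothetical, not from the question). One finds $\mathbb{E}(X)=np^{2}$ and, since only adjacent pairs of indicators are correlated, $\operatorname{Var}(X)=n(p^{2}-p^{4})+2n(p^{3}-p^{4})$:

```python
import numpy as np

# Monte Carlo check of E[X] = n p^2 and
# Var(X) = n(p^2 - p^4) + 2n(p^3 - p^4) on a circle of n bulbs.
rng = np.random.default_rng(1)
n, p, trials = 12, 0.3, 200_000
bulbs = rng.random((trials, n)) < p            # each bulb on w.p. p
# bulb i AND its clockwise neighbour (np.roll wraps around the circle):
X = np.sum(bulbs & np.roll(bulbs, -1, axis=1), axis=1)
mean_X, var_X = X.mean(), X.var()
print(mean_X, var_X)  # approx 1.08 and approx 1.4364
```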

• # Paper 2, Section II, F

Recall that a random variable $X$ in $\mathbb{R}^{2}$ is bivariate normal or Gaussian if $u^{T} X$ is normal for all $u \in \mathbb{R}^{2}$. Let $X=\left(\begin{array}{c}X_{1} \\ X_{2}\end{array}\right)$ be bivariate normal.

(a) (i) Show that if $A$ is a $2 \times 2$ real matrix then $A X$ is bivariate normal.

(ii) Let $\mu=\mathbb{E}(X)$ and $V=\operatorname{Var}(X)=\mathbb{E}\left[(X-\mu)(X-\mu)^{T}\right]$. Find the moment generating function $M_{X}(\lambda)=\mathbb{E}\left(e^{\lambda^{T} X}\right)$ of $X$ and deduce that the distribution of a bivariate normal random variable $X$ is uniquely determined by $\mu$ and $V$.

(iii) Let $\mu_{i}=\mathbb{E}\left(X_{i}\right)$ and $\sigma_{i}^{2}=\operatorname{Var}\left(X_{i}\right)$ for $i=1,2$. Let $\rho=\frac{\operatorname{Cov}\left(X_{1}, X_{2}\right)}{\sigma_{1} \sigma_{2}}$ be the correlation of $X_{1}$ and $X_{2}$. Write down $V$ in terms of some or all of $\mu_{1}, \mu_{2}, \sigma_{1}, \sigma_{2}$ and $\rho$. If $\operatorname{Cov}\left(X_{1}, X_{2}\right)=0$, why must $X_{1}$ and $X_{2}$ be independent?

For each $a \in \mathbb{R}$, find $\operatorname{Cov}\left(X_{1}, X_{2}-a X_{1}\right)$. Hence show that $X_{2}=a X_{1}+Y$ for some normal random variable $Y$ in $\mathbb{R}$ that is independent of $X_{1}$ and some $a \in \mathbb{R}$ that should be specified.

(b) A certain species of East Anglian goblin has left arm of mean length $100 \mathrm{~cm}$ with standard deviation $1 \mathrm{~cm}$, and right arm of mean length $102 \mathrm{~cm}$ with standard deviation $2 \mathrm{~cm}$. The correlation of left- and right-arm-length of a goblin is $\frac{1}{2}$. You may assume that the distribution of left- and right-arm-lengths can be modelled by a bivariate normal distribution. What is the probability that a randomly selected goblin has longer right arm than left arm?

[You may give your answer in terms of the distribution function $\Phi$ of a $N(0,1)$ random variable $Z$. That is, $\Phi(t)=\mathbb{P}(Z \leqslant t)$.]
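As a numerical cross-check (not a substitute for the calculation): writing $L$ and $R$ for the left- and right-arm lengths, $R-L$ is normal with mean $102-100=2$ and variance $2^{2}+1^{2}-2\cdot\frac12\cdot 2\cdot 1=3$, so the answer should be $\Phi(2/\sqrt{3})$. A Monte Carlo sketch:

```python
import math
import numpy as np

# L, R bivariate normal; Cov(L, R) = rho * sigma_L * sigma_R = 0.5*1*2 = 1.
rng = np.random.default_rng(2)
mean = [100.0, 102.0]
cov = [[1.0, 1.0],
       [1.0, 4.0]]
L, R = rng.multivariate_normal(mean, cov, size=400_000).T
estimate = np.mean(R > L)
# Phi(2/sqrt(3)) via the error function: Phi(t) = (1 + erf(t/sqrt(2)))/2.
phi = 0.5*(1 + math.erf((2/math.sqrt(3)) / math.sqrt(2)))
print(estimate, phi)
```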

• # Paper 2, Section II, F

Let $m$ and $n$ be positive integers with $n>m>0$ and let $p \in(0,1)$ be a real number. A random walk on the integers starts at $m$. At each step, the walk moves up 1 with probability $p$ and down 1 with probability $q=1-p$. Find, with proof, the probability that the walk hits $n$ before it hits 0 .

Patricia owes a very large sum $£ 2(N !)$ of money to a member of a violent criminal gang. She must return the money this evening to avoid terrible consequences but she only has $£ N !$. She goes to a casino and plays a game with the probability of her winning being $\frac{18}{37}$. If she bets $£ a$ on the game and wins then her $£ a$ is returned along with a further $£ a$; if she loses then her $£ a$ is lost.

The rules of the casino allow Patricia to play the game repeatedly until she runs out of money. She may choose the amount $£ a$ that she bets to be any integer $a$ with $1 \leqslant a \leqslant N$, but it must be the same amount each time. What choice of $a$ would be best and why?

What choice of $a$ would be best, and why, if instead the probability of her winning the game is $\frac{19}{37}$ ?

• # Paper 2, Section II, F

(a) State the axioms that must be satisfied by a probability measure $\mathbb{P}$ on a probability space $\Omega$.

Let $A$ and $B$ be events with $\mathbb{P}(B)>0$. Define the conditional probability $\mathbb{P}(A \mid B)$.

Let $B_{1}, B_{2}, \ldots$ be pairwise disjoint events with $\mathbb{P}\left(B_{i}\right)>0$ for all $i$ and $\Omega=\cup_{i=1}^{\infty} B_{i}$. Starting from the axioms, show that

$\mathbb{P}(A)=\sum_{i=1}^{\infty} \mathbb{P}\left(A \mid B_{i}\right) \mathbb{P}\left(B_{i}\right)$

and deduce Bayes' theorem.

(b) Two identical urns contain white balls and black balls. Urn I contains 45 white balls and 30 black balls. Urn II contains 12 white balls and 36 black balls. You do not know which urn is which.

(i) Suppose you select an urn and draw one ball at random from it. The ball is white. What is the probability that you selected Urn I?

(ii) Suppose instead you draw one ball at random from each urn. One of the balls is white and one is black. What is the probability that the white ball came from Urn I?

(c) Now suppose there are $n$ identical urns containing white balls and black balls, and again you do not know which urn is which. Each urn contains 1 white ball. The $i$ th urn contains $2^{i}-1$ black balls $(1 \leqslant i \leqslant n)$. You select an urn and draw one ball at random from it. The ball is white. Let $p(n)$ be the probability that if you replace this ball and again draw a ball at random from the same urn then the ball drawn on the second occasion is also white. Show that $p(n) \rightarrow \frac{1}{3}$ as $n \rightarrow \infty$.
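The limit can be checked with exact arithmetic (an illustration of the answer, not the required proof): urn $i$ holds $2^{i}$ balls of which one is white, so $\mathbb{P}(\text{white}\mid\text{urn }i)=2^{-i}$, and Bayes' theorem gives $p(n)=\sum_{i}4^{-i}\big/\sum_{i}2^{-i}$, which tends to $\frac{1/3}{1}=\frac13$:

```python
from fractions import Fraction

# p(n) = sum_i 4**-i / sum_i 2**-i by Bayes' theorem, computed exactly.
def p(n):
    white = [Fraction(1, 2**i) for i in range(1, n + 1)]
    return sum(w*w for w in white) / sum(white)

print(float(p(5)), float(p(30)))  # approaching 1/3
```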
