• 2.I.1D

Solve the equation

$\ddot{y}+\dot{y}-2 y=e^{-t}$

subject to the conditions $y(t)=\dot{y}(t)=0$ at $t=0$. Solve the equation

$\ddot{y}+\dot{y}-2 y=e^{t}$

subject to the same conditions $y(t)=\dot{y}(t)=0$ at $t=0$.
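As a check on these two initial-value problems, the following sketch (assuming SymPy is available; not part of the question) solves both symbolically. The point of the contrast is that $e^{t}$ is resonant, since $m=1$ is a root of the characteristic equation $m^{2}+m-2=0$, while $e^{-t}$ is not.

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ics = {y(0): 0, y(t).diff(t).subs(t, 0): 0}

# e^{-t} is non-resonant: the characteristic roots of m^2 + m - 2 are 1 and -2
sol1 = sp.dsolve(y(t).diff(t, 2) + y(t).diff(t) - 2*y(t) - sp.exp(-t),
                 y(t), ics=ics)
# e^{t} hits the root m = 1, so the particular integral picks up a t*e^t term
sol2 = sp.dsolve(y(t).diff(t, 2) + y(t).diff(t) - 2*y(t) - sp.exp(t),
                 y(t), ics=ics)
print(sol1.rhs)   # equivalent to exp(t)/6 + exp(-2*t)/3 - exp(-t)/2
print(sol2.rhs)   # equivalent to t*exp(t)/3 - exp(t)/9 + exp(-2*t)/9
```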

• 2.I.2D

Consider the equation

$\frac{d y}{d x}=x\left(\frac{1-y^{2}}{1-x^{2}}\right)^{1 / 2} \qquad(*)$

where the positive square root is taken, within the square $\mathcal{S}: 0 \leqslant x<1,0 \leqslant y \leqslant 1$. Find the solution that begins at $x=y=0$. Sketch the corresponding solution curve, commenting on how its tangent behaves near each extremity. By inspection of the right-hand side of $(*)$, or otherwise, roughly sketch, using small line segments, the directions of flow throughout the square $\mathcal{S}$.
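Separating variables gives $\arcsin y = 1-\sqrt{1-x^{2}}$ for the solution through the origin. The sketch below (plain Python, purely illustrative) checks numerically that this candidate satisfies the differential equation at a few interior points of $\mathcal{S}$.

```python
import math

def y_exact(x):
    # candidate from separation of variables: arcsin(y) = 1 - sqrt(1 - x^2)
    return math.sin(1.0 - math.sqrt(1.0 - x*x))

for x in (0.1, 0.5, 0.9):
    h = 1e-6
    dydx = (y_exact(x + h) - y_exact(x - h)) / (2.0*h)   # central difference
    y = y_exact(x)
    rhs = x * math.sqrt((1.0 - y*y) / (1.0 - x*x))
    assert abs(dydx - rhs) < 1e-6
print("candidate solution satisfies the ODE at the test points")
```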

• 2.II.5D

Explain what is meant by an integrating factor for an equation of the form

$\frac{d y}{d x}+f(x, y)=0$

Show that $2 y e^{x}$ is an integrating factor for

$\frac{d y}{d x}+\frac{2 x+x^{2}+y^{2}}{2 y}=0$

and find the solution $y=y(x)$ such that $y(0)=a$, for given $a>0$.

Show that $2 x+x^{2} \geqslant-1$ for all $x$ and hence that

$\frac{d y}{d x} \leqslant \frac{1-y^{2}}{2 y}$

For a solution with $a \geqslant 1$, show graphically, by considering the sign of $d y / d x$ first for $x=0$ and then for $x<0$, that $d y / d x<0$ for all $x \leqslant 0$.

Sketch the solution for the case $a=1$, and show that $d y / d x \rightarrow-\infty$ both as $x \rightarrow-\infty$ and as $x \rightarrow b$ from below, where $b \approx 0.7035$ is the positive number that satisfies $b^{2}=e^{-b}$.

[Do not consider the range $x \geqslant b$.]
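Using the integrating factor, the solution satisfies $(x^{2}+y^{2})e^{x}=a^{2}$, so for $a=1$ the solution curve meets $y=0$ where $x^{2}=e^{-x}$. A quick bisection (a plain-Python sketch, not part of the question) confirms the quoted root $b \approx 0.7035$.

```python
import math

# bisect g(b) = b^2 - exp(-b) on [0, 1]; g(0) = -1 < 0 and g(1) = 1 - 1/e > 0
g = lambda b: b*b - math.exp(-b)
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
b = 0.5*(lo + hi)
print(round(b, 4))   # 0.7035
```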

• 2.II.6D

Solve the differential equation

$\frac{d y}{d t}=r y(1-a y)$

for the general initial condition $y=y_{0}$ at $t=0$, where $r, a$, and $y_{0}$ are positive constants. Deduce that the equilibria at $y=a^{-1}$ and $y=0$ are stable and unstable, respectively.

By using the approximate finite-difference formula

$\frac{d y}{d t}=\frac{y_{n+1}-y_{n}}{\delta t}$

for the derivative of $y$ at $t=n \delta t$, where $\delta t$ is a positive constant and $y_{n}=y(n \delta t)$, show that the differential equation when thus approximated becomes the difference equation

$u_{n+1}=\lambda\left(1-u_{n}\right) u_{n},$

where $\lambda=1+r \delta t>1$ and where $u_{n}=\lambda^{-1} a(\lambda-1) y_{n}$. Find the two equilibria and, by linearizing the equation about them or otherwise, show that one is always unstable (given that $\lambda>1$ ) and that the other is stable or unstable according as $\lambda<3$ or $\lambda>3$. Show that this last instability is oscillatory with period $2 \delta t$. Why does this last instability have no counterpart for the differential equation? Show graphically how this instability can equilibrate to a periodic, finite-amplitude oscillation when $\lambda=3.2$.
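The finite-amplitude oscillation at $\lambda=3.2$ can also be seen by direct iteration. The sketch below (illustrative only; the sample starting value $u_0=0.3$ is arbitrary) discards a transient and shows the orbit settling onto a 2-cycle, i.e. a period-$2\delta t$ oscillation.

```python
# iterate u_{n+1} = lambda*(1 - u_n)*u_n at lambda = 3.2
lam = 3.2
u = 0.3
for _ in range(1000):          # discard the transient
    u = lam*(1.0 - u)*u
cycle = []
for _ in range(4):             # record the attractor
    u = lam*(1.0 - u)*u
    cycle.append(round(u, 6))
print(cycle)   # two values repeating, roughly 0.513 and 0.799 alternating
```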

• 2.II.7D

The homogeneous equation

$\ddot{y}+p(t) \dot{y}+q(t) y=0$

has non-constant, non-singular coefficients $p(t)$ and $q(t)$. Two solutions of the equation, $y(t)=y_{1}(t)$ and $y(t)=y_{2}(t)$, are given. The solutions are known to be such that the determinant

$W(t)=\left|\begin{array}{ll} y_{1} & y_{2} \\ \dot{y}_{1} & \dot{y}_{2} \end{array}\right|$

is non-zero for all $t$. Define what is meant by linear dependence, and show that the two given solutions are linearly independent. Show also that

$W(t) \propto \exp \left(-\int^{t} p(s) d s\right) .$

In the corresponding inhomogeneous equation

$\ddot{y}+p(t) \dot{y}+q(t) y=f(t)$

the right-hand side $f(t)$ is a prescribed forcing function. Construct a particular integral of this inhomogeneous equation in the form

$y(t)=a_{1}(t) y_{1}(t)+a_{2}(t) y_{2}(t),$

where the two functions $a_{i}(t)$ are to be determined such that

$y_{1}(t) \dot{a}_{1}(t)+y_{2}(t) \dot{a}_{2}(t)=0$

for all $t$. Express your result for the functions $a_{i}(t)$ in terms of integrals of the functions $f(t) y_{1}(t) / W(t)$ and $f(t) y_{2}(t) / W(t)$.

Consider the case in which $p(t)=0$ for all $t$ and $q(t)$ is a positive constant, $q=\omega^{2}$ say, and in which the forcing $f(t)=\sin (\omega t)$. Show that in this case $y_{1}(t)$ and $y_{2}(t)$ can be taken as $\cos (\omega t)$ and $\sin (\omega t)$ respectively. Evaluate $f(t) y_{1}(t) / W(t)$ and $f(t) y_{2}(t) / W(t)$ and show that, as $t \rightarrow \infty$, one of the $a_{i}(t)$ increases in magnitude like a power of $t$ to be determined.
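For the constant-coefficient case at the end, the variation-of-parameters construction can be checked symbolically (a sketch assuming SymPy is available): the Wronskian is $W=\omega$, and the integral for $a_{1}(t)$ acquires a secular term proportional to $t$.

```python
import sympy as sp

t, w = sp.symbols('t omega', positive=True)
y1, y2 = sp.cos(w*t), sp.sin(w*t)
f = sp.sin(w*t)
W = sp.simplify(y1*sp.diff(y2, t) - y2*sp.diff(y1, t))  # Wronskian; here W = omega
a1 = sp.integrate(-f*y2/W, t)   # -sin^2(wt)/w: contains a secular -t/(2*omega) term
a2 = sp.integrate( f*y1/W, t)   # sin*cos/w: remains bounded
yp = sp.simplify(a1*y1 + a2*y2)
print(W)
print(yp)   # the particular integral grows in magnitude like t
```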

• 2.II.8D

For any solution of the equations

$\begin{aligned} &\dot{x}=\alpha x-y+y^{3} \quad(\alpha \text { constant }) \\ &\dot{y}=-x \end{aligned}$

show that

$\frac{d}{d t}\left(x^{2}-y^{2}+\frac{1}{2} y^{4}\right)=2 \alpha x^{2} .$

What does this imply about the behaviour of phase-plane trajectories at large distances from the origin as $t \rightarrow \infty$, in the case $\alpha=0$ ? Give brief reasoning but do not try to find explicit solutions.

Analyse the properties of the critical points and sketch the phase portrait (a) in the case $\alpha=0$, (b) in the case $\alpha=0.1$, and (c) in the case $\alpha=-0.1$.
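The stated identity can be verified mechanically (a sketch assuming SymPy is available, included only as a check):

```python
import sympy as sp

t, a = sp.symbols('t alpha')
x = sp.Function('x')(t)
y = sp.Function('y')(t)
E = x**2 - y**2 + sp.Rational(1, 2)*y**4
# substitute the right-hand sides of the system into dE/dt
dE = sp.diff(E, t).subs({x.diff(t): a*x - y + y**3, y.diff(t): -x})
print(sp.simplify(dE))   # simplifies to 2*alpha*x(t)**2
```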


• 2.I.3F

Define the indicator function $I_{A}$ of an event $A$.

Let $I_{i}$ be the indicator function of the event $A_{i}, 1 \leq i \leq n$, and let $N=\sum_{1}^{n} I_{i}$ be the number of values of $i$ such that $A_{i}$ occurs. Show that $E(N)=\sum_{i} p_{i}$ where $p_{i}=P\left(A_{i}\right)$, and find $\operatorname{var}(N)$ in terms of the quantities $p_{i j}=P\left(A_{i} \cap A_{j}\right)$.

Using Chebyshev's inequality or otherwise, show that

$P(N=0) \leq \frac{\operatorname{var}(N)}{\{E(N)\}^{2}}$

• 2.I.4F

A coin shows heads with probability $p$ on each toss. Let $\pi_{n}$ be the probability that the number of heads after $n$ tosses is even. Show carefully that $\pi_{n+1}=(1-p) \pi_{n}+p\left(1-\pi_{n}\right)$, $n \geq 1$, and hence find $\pi_{n}$. [The number 0 is even.]
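The recurrence is solved by $\pi_{n}=\frac{1}{2}\left(1+(1-2p)^{n}\right)$, which the following sketch (plain Python, with an arbitrary sample value of $p$) checks numerically.

```python
# check pi_n = (1 + (1-2p)^n)/2 against the recurrence, starting from pi_0 = 1
p = 0.3
pi_n = 1.0                     # zero heads after zero tosses, and 0 is even
for n in range(1, 11):
    pi_n = (1 - p)*pi_n + p*(1 - pi_n)
    assert abs(pi_n - 0.5*(1 + (1 - 2*p)**n)) < 1e-12
print("closed form verified for n = 1..10")
```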

• 2.II.10F

There is a random number $N$ of foreign objects in my soup, with mean $\mu$ and finite variance. Each object is a fly with probability $p$, and otherwise is a spider; different objects have independent types. Let $F$ be the number of flies and $S$ the number of spiders.

(a) Show that $G_{F}(s)=G_{N}(p s+1-p)$. [$G_{X}$ denotes the probability generating function of a random variable $X$. You should present a clear statement of any general result used.]

(b) Suppose $N$ has the Poisson distribution with parameter $\mu$. Show that $F$ has the Poisson distribution with parameter $\mu p$, and that $F$ and $S$ are independent.

(c) Let $p=\frac{1}{2}$ and suppose that $F$ and $S$ are independent. [You are given nothing about the distribution of $N$.] Show that $G_{N}(s)=G_{N}\left(\frac{1}{2}(1+s)\right)^{2}$. By working with the function $H(s)=G_{N}(1-s)$ or otherwise, deduce that $N$ has the Poisson distribution. [You may assume that $\left(1+\frac{x}{n}+\mathrm{o}\left(n^{-1}\right)\right)^{n} \rightarrow e^{x}$ as $n \rightarrow \infty$.]
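The thinning result of part (b) can be sanity-checked at the level of generating functions (a sketch assuming SymPy is available):

```python
import sympy as sp

s, mu, p = sp.symbols('s mu p')
G_N = sp.exp(mu*(s - 1))            # pgf of the Poisson(mu) distribution
G_F = G_N.subs(s, p*s + 1 - p)      # part (a): G_F(s) = G_N(ps + 1 - p)
print(sp.simplify(G_F))             # equals exp(mu*p*(s - 1)), the Poisson(mu*p) pgf
```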

• 2.II.11F

Let $X, Y, Z$ be independent random variables each with the uniform distribution on the interval $[0,1]$.

(a) Show that $X+Y$ has density function

$f_{X+Y}(u)= \begin{cases}u & \text { if } 0 \leq u \leq 1 \\ 2-u & \text { if } 1 \leq u \leq 2 \\ 0 & \text { otherwise }\end{cases}$

(b) Show that $P(Z>X+Y)=\frac{1}{6}$.

(c) You are provided with three rods of respective lengths $X, Y, Z$. Show that the probability that these rods may be used to form the sides of a triangle is $\frac{1}{2}$.

(d) Find the density function $f_{X+Y+Z}(s)$ of $X+Y+Z$ for $0 \leqslant s \leqslant 1$. Let $W$ be uniformly distributed on $[0,1]$, and independent of $X, Y, Z$. Show that the probability that rods of lengths $W, X, Y, Z$ may be used to form the sides of a quadrilateral is $\frac{5}{6}$.
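The three probabilities $\frac{1}{6}$, $\frac{1}{2}$ and $\frac{5}{6}$ can be checked by simulation. The sketch below (plain Python, purely illustrative) uses the polygon criterion that every side must be shorter than the sum of the others.

```python
import random

random.seed(1)
N = 200_000
hits_z = hits_tri = hits_quad = 0
for _ in range(N):
    x, y, z, w = (random.random() for _ in range(4))
    if z > x + y:                                   # part (b)
        hits_z += 1
    if max(x, y, z) < x + y + z - max(x, y, z):     # part (c): triangle
        hits_tri += 1
    s = x + y + z + w
    if max(x, y, z, w) < s - max(x, y, z, w):       # part (d): quadrilateral
        hits_quad += 1
print(hits_z/N, hits_tri/N, hits_quad/N)   # near 1/6, 1/2, 5/6
```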

• 2.II.12F

(a) Explain what is meant by the term 'branching process'.

(b) Let $X_{n}$ be the size of the $n$th generation of a branching process in which each family size has probability generating function $G$, and assume that $X_{0}=1$. Show that the probability generating function $G_{n}$ of $X_{n}$ satisfies $G_{n+1}(s)=G_{n}(G(s))$ for $n \geq 1$.

(c) Show that $G(s)=1-\alpha(1-s)^{\beta}$ is the probability generating function of a non-negative integer-valued random variable when $\alpha, \beta \in(0,1)$, and find $G_{n}$ explicitly when $G$ is thus given.

(d) Find the probability that $X_{n}=0$, and show that it converges as $n \rightarrow \infty$ to $1-\alpha^{1 /(1-\beta)}$. Explain carefully why this implies that the probability of ultimate extinction equals $1-\alpha^{1 /(1-\beta)}$.
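Since the extinction probabilities satisfy the standard recursion $q_{n+1}=G(q_{n})$ with $q_{0}=0$, the stated limit can be checked by iteration. This sketch (plain Python, with arbitrary sample values of $\alpha$ and $\beta$ in $(0,1)$) is illustrative only.

```python
# iterate q_{n+1} = G(q_n) for G(s) = 1 - alpha*(1 - s)**beta, q_0 = 0
alpha, beta = 0.5, 0.6
G = lambda s: 1.0 - alpha*(1.0 - s)**beta
q = 0.0
for _ in range(200):
    q = G(q)
limit = 1.0 - alpha**(1.0/(1.0 - beta))
print(round(q, 6), round(limit, 6))   # both approximately 0.823223
```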

• 2.II.9F

(a) Define the conditional probability $P(A \mid B)$ of the event $A$ given the event $B$. Let $\left\{B_{i}: 1 \leq i \leq n\right\}$ be a partition of the sample space $\Omega$ such that $P\left(B_{i}\right)>0$ for all $i$. Show that, if $P(A)>0$,

$P\left(B_{i} \mid A\right)=\frac{P\left(A \mid B_{i}\right) P\left(B_{i}\right)}{\sum_{j} P\left(A \mid B_{j}\right) P\left(B_{j}\right)} .$

(b) There are $n$ urns, the $r$ th of which contains $r-1$ red balls and $n-r$ blue balls. You pick an urn (uniformly) at random and remove two balls without replacement. Find the probability that the first ball is blue, and the conditional probability that the second ball is blue given that the first is blue. [You may assume that $\sum_{i=1}^{n-1} i(i-1)=\frac{1}{3} n(n-1)(n-2)$.]

(c) What is meant by saying that two events $A$ and $B$ are independent?

(d) Two fair dice are rolled. Let $A_{s}$ be the event that the sum of the numbers shown is $s$, and let $B_{i}$ be the event that the first die shows $i$. For what values of $s$ and $i$ are the two events $A_{s}, B_{i}$ independent?
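The two probabilities in part (b) come out to $\frac{1}{2}$ and $\frac{2}{3}$, independently of $n$; the following sketch checks this in exact rational arithmetic for one sample value of $n$.

```python
from fractions import Fraction

n = 10
# urn r holds r - 1 red and n - r blue balls, so n - 1 balls in all
p_first = sum(Fraction(n - r, n - 1) for r in range(1, n + 1)) / n
p_both = sum(Fraction((n - r)*(n - r - 1), (n - 1)*(n - 2))
             for r in range(1, n + 1)) / n
print(p_first, p_both / p_first)   # 1/2 2/3
```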
