# 2.I.1B

Solve the initial value problem

$\frac{d x}{d t}=x(1-x), \quad x(0)=x_{0},$

and sketch the phase portrait. Describe the behaviour as $t \rightarrow+\infty$ and as $t \rightarrow-\infty$ of solutions with initial value satisfying $0<x_{0}<1$.
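
Separating variables suggests the closed form $x(t)=x_{0} e^{t} /\left(1-x_{0}+x_{0} e^{t}\right)$ (our derivation, not quoted from the problem). A minimal SymPy sketch checking it against the ODE and the initial condition:

```python
import sympy as sp

t, x0 = sp.symbols('t x0')

# Candidate solution from separation of variables (an assumption to verify):
x = x0*sp.exp(t)/(1 - x0 + x0*sp.exp(t))

residual = sp.simplify(x.diff(t) - x*(1 - x))   # dx/dt - x(1-x)
init_err = sp.simplify(x.subs(t, 0) - x0)       # x(0) - x0
print(residual, init_err)  # 0 0: solves the ODE and matches the initial data
```

For $0<x_{0}<1$ the denominator never vanishes, and the formula gives $x \rightarrow 1$ as $t \rightarrow+\infty$ and $x \rightarrow 0$ as $t \rightarrow-\infty$, consistent with the fixed points of the phase portrait.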

# 2.I.2B

Consider the first order system

$\frac{d \mathbf{x}}{d t}-A \mathbf{x}=e^{\lambda t} \mathbf{v}$

to be solved for $\mathbf{x}(t)=\left(x_{1}(t), x_{2}(t), \ldots, x_{n}(t)\right) \in \mathbb{R}^{n}$, where $A$ is an $n \times n$ matrix, $\lambda \in \mathbb{R}$ and $\mathbf{v} \in \mathbb{R}^{n}$. Show that if $\lambda$ is not an eigenvalue of $A$ there is a solution of the form $\mathbf{x}(t)=e^{\lambda t} \mathbf{u}$. For $n=2$, given

$A=\left(\begin{array}{ll} 0 & 1 \\ 0 & 0 \end{array}\right), \quad \lambda=1, \quad \text { and } \quad \mathbf{v}=\left(\begin{array}{l} 1 \\ 1 \end{array}\right)$

find this solution.
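
Substituting $\mathbf{x}(t)=e^{\lambda t} \mathbf{u}$ reduces the system to $(\lambda I-A) \mathbf{u}=\mathbf{v}$, solvable precisely because $\lambda$ is not an eigenvalue of $A$. A numerical sketch for the given data, assuming this reduction:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
lam = 1.0
v = np.array([1.0, 1.0])

# Both eigenvalues of A are 0, so lam = 1 is not an eigenvalue and
# (lam*I - A) u = v has a unique solution u.
u = np.linalg.solve(lam*np.eye(2) - A, v)
print(u)                          # [2. 1.]
print(np.allclose(u - A @ u, v))  # True: e^t u then solves x' - A x = e^t v
```

The sketch gives $\mathbf{u}=(2,1)^{T}$, i.e. $\mathbf{x}(t)=e^{t}(2,1)^{T}$.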

# 2.II.5B

Find the general solution of the system

\begin{aligned} &\frac{d x}{d t}=5 x+3 y+e^{2 t} \\ &\frac{d y}{d t}=2 x+2 e^{t} \\ &\frac{d z}{d t}=x+y+e^{t} \end{aligned}
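
Writing the homogeneous part as $d \mathbf{x} / d t=M \mathbf{x}$ with $\mathbf{x}=(x, y, z)$, the complementary function is governed by the eigenvalues of $M$. A sketch computing them; since $2$ and $1$ are not among the eigenvalues, the forcings $e^{2 t}$ and $e^{t}$ are non-resonant:

```python
import numpy as np

# Coefficient matrix of the homogeneous system; note z appears in no
# right-hand side, which forces a zero eigenvalue.
M = np.array([[5.0, 3.0, 0.0],
              [2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0]])

eig = np.sort(np.linalg.eigvals(M).real)
print(eig)  # approximately [-1, 0, 6]: modes e^{-t}, constant, e^{6t}
```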

# 2.II.6B

(i) Consider the equation

$\frac{\partial u}{\partial t}+\frac{\partial u}{\partial x}=\frac{\partial^{2} u}{\partial x^{2}}+f(t, x)$

and, using the change of variables $(t, x) \mapsto(s, y)=(t, x-t)$, show that it can be transformed into an equation of the form

$\frac{\partial U}{\partial s}=\frac{\partial^{2} U}{\partial y^{2}}+F(s, y)$

where $U(s, y)=u(s, y+s)$ and you should determine $F(s, y)$.

(ii) Let $H(y)$ be the Heaviside function. Find the general continuously differentiable solution of the equation

$w^{\prime \prime}(y)+H(y)=0$

(iii) Using (i) and (ii), find a continuously differentiable solution of

$\frac{\partial u}{\partial t}+\frac{\partial u}{\partial x}=\frac{\partial^{2} u}{\partial x^{2}}+H(x-t)$

such that $u(t, x) \rightarrow 0$ as $x \rightarrow-\infty$ and $u(t, x) \rightarrow-\infty$ as $x \rightarrow+\infty$.
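
Part (ii) gives $w(y)=-\frac{1}{2} y^{2} H(y)+a y+b$; taking $a=b=0$ and combining with part (i) suggests the candidate $u(t, x)=-\frac{1}{2}(x-t)^{2} H(x-t)$ (our derivation, an assumption to verify; it vanishes as $x \rightarrow-\infty$ and tends to $-\infty$ as $x \rightarrow+\infty$). A branchwise SymPy check that the PDE holds away from $x=t$:

```python
import sympy as sp

t, x = sp.symbols('t x')

# Candidate: u = 0 for x < t and u = -(x - t)**2/2 for x > t.
# Value and first derivatives both vanish at x = t, so u is C^1.
residuals = []
for u, H in [(sp.Integer(0), 0),        # branch x < t, where H(x - t) = 0
             (-(x - t)**2/2, 1)]:       # branch x > t, where H(x - t) = 1
    residuals.append(sp.simplify(u.diff(t) + u.diff(x) - u.diff(x, 2) - H))
print(residuals)  # [0, 0]: the equation holds on each branch
```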

# 2.II.7B

Let $p, q$ be continuous functions and let $y_{1}(x)$ and $y_{2}(x)$ be, respectively, the solutions of the initial value problems

\begin{aligned} &y_{1}^{\prime \prime}+p(x) y_{1}^{\prime}+q(x) y_{1}=0, \quad y_{1}(0)=0, y_{1}^{\prime}(0)=1, \\ &y_{2}^{\prime \prime}+p(x) y_{2}^{\prime}+q(x) y_{2}=0, \quad y_{2}(0)=1, y_{2}^{\prime}(0)=0 . \end{aligned}

If $f$ is any continuous function show that the solution of

$y^{\prime \prime}+p(x) y^{\prime}+q(x) y=f(x), \quad y(0)=0, y^{\prime}(0)=0$

is given by

$y(x)=\int_{0}^{x} \frac{y_{1}(s) y_{2}(x)-y_{1}(x) y_{2}(s)}{W(s)} f(s) d s,$

where $W(x)=y_{1}(x) y_{2}^{\prime}(x)-y_{1}^{\prime}(x) y_{2}(x)$ is the Wronskian. Use this method to find $y=y(x)$ such that

$y^{\prime \prime}+y=\sin x, \quad y(0)=0, y^{\prime}(0)=0 .$
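
Here $y_{1}=\sin x$ and $y_{2}=\cos x$ satisfy the stated initial data for $y^{\prime \prime}+y=0$, and $W \equiv-1$. A SymPy sketch evaluating the variation-of-parameters integral for $f=\sin$, which simplifies to $(\sin x-x \cos x) / 2$:

```python
import sympy as sp

x, s = sp.symbols('x s')
y1, y2 = sp.sin(x), sp.cos(x)      # y1(0)=0, y1'(0)=1; y2(0)=1, y2'(0)=0
W = sp.simplify(y1*y2.diff(x) - y1.diff(x)*y2)   # Wronskian, constant -1

f = sp.sin(s)
green = (y1.subs(x, s)*y2 - y1*y2.subs(x, s))/W.subs(x, s)
y = sp.integrate(green*f, (s, 0, x))

print(sp.simplify(y.diff(x, 2) + y - sp.sin(x)))  # 0: the ODE holds
print(sp.simplify(y.subs(x, 0)),
      sp.simplify(y.diff(x).subs(x, 0)))          # 0 0: initial data hold
```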

# 2.II.8B

Obtain a power series solution of the problem

$x y^{\prime \prime}+y=0, \quad y(0)=0, y^{\prime}(0)=1$

[You need not find the general power series solution.]

Let $y_{0}(x), y_{1}(x), y_{2}(x), \ldots$ be defined recursively as follows: $y_{0}(x)=x$. Given $y_{n-1}(x)$, define $y_{n}(x)$ to be the solution of

$x y_{n}^{\prime \prime}(x)=-y_{n-1}, \quad y_{n}(0)=0, y_{n}^{\prime}(0)=1$

By calculating $y_{1}, y_{2}, y_{3}$, or otherwise, obtain and prove a general formula for $y_{n}(x)$. Comment on the relation to the power series solution obtained previously.
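
Substituting $y=\sum_{n \geqslant 1} a_{n} x^{n}$ into $x y^{\prime \prime}+y=0$ gives the recurrence $n(n+1) a_{n+1}+a_{n}=0$ with $a_{1}=1$ (our derivation; the iterates $y_{n}$ appear to reproduce the partial sums of this series). A sketch comparing the recurrence with the closed form $a_{n}=(-1)^{n-1} /(n !(n-1) !)$ using exact rationals:

```python
from fractions import Fraction
from math import factorial

# Recurrence from the x^n coefficient of x y'' + y = 0: n(n+1) a_{n+1} + a_n = 0.
a = {1: Fraction(1)}
for n in range(1, 8):
    a[n + 1] = -a[n] / (n*(n + 1))

# Compare with the candidate closed form a_n = (-1)^{n-1} / (n! (n-1)!).
ok = all(a[n] == Fraction((-1)**(n - 1), factorial(n)*factorial(n - 1))
         for n in range(1, 9))
print(a[2], a[3], ok)  # -1/2 1/12 True
```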


# 2.I.3F

What is a convex function? State Jensen's inequality for a convex function of a random variable which takes finitely many values.

Let $p \geqslant 1$. By using Jensen's inequality, or otherwise, find the smallest constant $c_{p}$ so that

$(a+b)^{p} \leqslant c_{p}\left(a^{p}+b^{p}\right) \text { for all } a, b \geqslant 0 .$

[You may assume that $x \mapsto|x|^{p}$ is convex for $p \geqslant 1$.]
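
Applying the assumed convexity of $x \mapsto x^{p}$ at the midpoint $(a+b) / 2$ suggests the candidate $c_{p}=2^{p-1}$, with equality when $a=b$. A numeric sanity check of this candidate:

```python
# Check (a+b)^p <= 2^{p-1} (a^p + b^p) on a small grid, equality at a = b.
for p in (1.0, 1.5, 2.0, 3.0):
    c = 2**(p - 1)
    for a in (0.0, 0.3, 1.0, 2.5):
        for b in (0.0, 0.7, 1.0, 4.0):
            assert (a + b)**p <= c*(a**p + b**p) + 1e-12
    a = b = 1.7
    print(p, (a + b)**p, c*(a**p + b**p))  # the two sides agree when a = b
```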

# 2.I.4F

Let $K$ be a fixed positive integer and $X$ a discrete random variable with values in $\{1,2, \ldots, K\}$. Define the probability generating function of $X$. Express the mean of $X$ in terms of its probability generating function. The Dirichlet probability generating function of $X$ is defined as

$q(z)=\sum_{n=1}^{K} \frac{1}{n^{z}} P(X=n)$

Express the mean of $X$ and the mean of $\log X$ in terms of $q(z)$.
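
With $q(z)=\sum_{n} n^{-z} P(X=n)$, differentiating under the (finite) sum suggests $E[X]=q(-1)$ and $E[\log X]=-q^{\prime}(0)$. A SymPy check on a small hypothetical distribution (illustrative, not from the question):

```python
import sympy as sp

z = sp.symbols('z')
# Hypothetical distribution on {1, 2, 3, 4}, chosen only for illustration.
p = {1: sp.Rational(1, 2), 2: sp.Rational(1, 4),
     3: sp.Rational(1, 8), 4: sp.Rational(1, 8)}

q = sum(pn*n**(-z) for n, pn in p.items())   # Dirichlet pgf

mean_X = q.subs(z, -1)                 # candidate: E[X] = q(-1)
mean_logX = -sp.diff(q, z).subs(z, 0)  # candidate: E[log X] = -q'(0)

diff_log = sp.simplify(mean_logX - sum(pn*sp.log(n) for n, pn in p.items()))
print(mean_X, diff_log)  # 15/8 0
```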

# 2.II.10F

Let $X, Y$ be independent random variables with values in $(0, \infty)$ and the same probability density $\frac{2}{\sqrt{\pi}} e^{-x^{2}}$. Let $U=X^{2}+Y^{2}, V=Y / X$. Compute the joint probability density of $U, V$ and the marginal densities of $U$ and $V$ respectively. Are $U$ and $V$ independent?
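
On the positive quadrant the map $(x, y) \mapsto(u, v)=\left(x^{2}+y^{2}, y / x\right)$ inverts to $x=\sqrt{u /\left(1+v^{2}\right)}$, $y=v \sqrt{u /\left(1+v^{2}\right)}$ (our change of variables, to be checked). A SymPy sketch of the Jacobian computation; if it is right, the joint density factorises as $e^{-u} \cdot \frac{2}{\pi\left(1+v^{2}\right)}$, suggesting $U$ and $V$ are independent:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v', positive=True)

# Joint density of (X, Y), both half-normal with density 2/sqrt(pi) e^{-x^2}.
f = (2/sp.sqrt(sp.pi))*sp.exp(-x**2) * (2/sp.sqrt(sp.pi))*sp.exp(-y**2)

# Inverse of u = x^2 + y^2, v = y/x on the positive quadrant.
xi = sp.sqrt(u/(1 + v**2))
yi = v*sp.sqrt(u/(1 + v**2))

J = sp.simplify(sp.Matrix([[xi.diff(u), xi.diff(v)],
                           [yi.diff(u), yi.diff(v)]]).det())
# J = 1/(2(1+v^2)) > 0 here, so |J| = J.
g = sp.simplify(f.subs({x: xi, y: yi}) * J)
print(g)  # should match 2 e^{-u} / (pi (1 + v^2)), a product of marginals
```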

# 2.II.11F

A normal deck of playing cards contains 52 cards, four of each of the thirteen face values in the set $\mathcal{F}=\{A, 2,3,4,5,6,7,8,9,10, J, Q, K\}$. Suppose the deck is well shuffled so that each arrangement is equally likely. Write down the probability that the top and bottom cards have the same face value.

Consider the following algorithm for shuffling:

S1: Permute the deck randomly so that each arrangement is equally likely.

S2: If the top and bottom cards do not have the same face value, toss a biased coin that comes up heads with probability $p$; go back to step S1 if a head turns up. Otherwise stop.

All coin tosses and all permutations are assumed to be independent. When the algorithm stops, let $X$ and $Y$ denote the respective face values of the top and bottom cards and compute the probability that $X=Y$. Write down the probability that $X=x$ for some $x \in \mathcal{F}$ and the probability that $Y=y$ for some $y \in \mathcal{F}$. What value of $p$ will make $X$ and $Y$ independent random variables? Justify your answer.
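
For the first part, the top card fixes a face value and exactly 3 of the other 51 cards share it, giving $3 / 51=1 / 17$. The sketch below uses our reading of S1-S2: each round the algorithm stops matched with probability $q=1 / 17$ and unmatched with probability $(1-q)(1-p)$, otherwise it repeats:

```python
import sympy as sp

p = sp.symbols('p', positive=True)

q = sp.Rational(3, 51)   # P(match) for one uniform shuffle: 3 of the other
                         # 51 cards share the top card's face value
print(q)                 # 1/17

# Conditioning on the round in which the algorithm stops:
P_match = sp.simplify(q / (q + (1 - q)*(1 - p)))
print(P_match.subs(p, 0))  # 1/17: p = 0 reduces to a single shuffle
```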

# 2.II.12F

Let $\gamma>0$ and define

$f(x)=\gamma \frac{1}{1+x^{2}}, \quad-\infty<x<\infty$

Find $\gamma$ such that $f$ is a probability density function. Let $\left\{X_{i}: i \geqslant 1\right\}$ be a sequence of independent, identically distributed random variables, each having $f$ with the correct choice of $\gamma$ as probability density. Compute the probability density function of $X_{1}+\cdots+X_{n}$. [You may use the identity

$m \int_{-\infty}^{\infty}\left\{\left(1+y^{2}\right)\left[m^{2}+(x-y)^{2}\right]\right\}^{-1} d y=\pi(m+1)\left\{(m+1)^{2}+x^{2}\right\}^{-1}$

valid for all $x \in \mathbb{R}$ and $m \in \mathbb{N}$.]

Deduce the probability density function of

$\frac{X_{1}+\cdots+X_{n}}{n}$

Explain why your result does not contradict the weak law of large numbers.
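
The quoted identity can be sanity-checked numerically: the substitution $y=\tan \theta$ turns the integral into one with a bounded integrand on $(-\pi / 2, \pi / 2)$, which a midpoint rule handles. A self-contained sketch (sample points, grid size and tolerance are our choices):

```python
from math import pi, tan

def lhs(m, x, n=100000):
    # m * integral over R of 1/((1+y^2)(m^2+(x-y)^2)) dy.
    # With y = tan(theta), the 1/(1+y^2) factor cancels sec^2(theta),
    # leaving the bounded integrand 1/(m^2 + (x - tan theta)^2).
    h = pi/n
    total = 0.0
    for k in range(n):
        theta = -pi/2 + (k + 0.5)*h   # midpoint rule
        total += 1.0/(m*m + (x - tan(theta))**2)
    return m*total*h

def rhs(m, x):
    return pi*(m + 1)/((m + 1)**2 + x**2)

for m in (1, 2):
    for x in (0.0, 2.5):
        print(m, x, abs(lhs(m, x) - rhs(m, x)) < 1e-6)
```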

# 2.II.9F

Suppose that a population evolves in generations. Let $Z_{n}$ be the number of members in the $n$-th generation and $Z_{0} \equiv 1$. Each member of the $n$-th generation gives birth to a family, possibly empty, of members of the $(n+1)$-th generation; the size of this family is a random variable and we assume that the family sizes of all individuals form a collection of independent identically distributed random variables with the same generating function $G$.

Let $G_{n}$ be the generating function of $Z_{n}$. State and prove a formula for $G_{n}$ in terms of $G$. Use this to compute the variance of $Z_{n}$.

Now consider the total number of individuals in the first $n$ generations; this number is a random variable and we write $H_{n}$ for its generating function. Find a formula that expresses $H_{n+1}(s)$ in terms of $H_{n}(s), G(s)$ and $s$.
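
Standard branching-process facts (not quoted from the question) are $G_{n}=G \circ G_{n-1}$ and, for offspring mean $m \neq 1$ and variance $\sigma^{2}$, $\operatorname{Var} Z_{n}=\sigma^{2} m^{n-1}\left(m^{n}-1\right) /(m-1)$. A SymPy check at $n=3$ with illustrative Poisson($\lambda$) offspring, for which $m=\sigma^{2}=\lambda$:

```python
import sympy as sp

s, lam = sp.symbols('s lam', positive=True)
G = sp.exp(lam*(s - 1))          # Poisson(lam) offspring pgf (illustrative choice)

# G_3 as the three-fold composition G(G(G(s))).
G3 = G.subs(s, G.subs(s, G))

m = sp.diff(G3, s).subs(s, 1)                    # E[Z_3] from the pgf
var = (sp.diff(G3, s, 2) + sp.diff(G3, s)
       - sp.diff(G3, s)**2).subs(s, 1)           # Var[Z_3] from the pgf

var_formula = lam*lam**2*(lam**3 - 1)/(lam - 1)  # sigma^2 m^{n-1}(m^n-1)/(m-1), n=3
print(sp.simplify(m - lam**3))                   # difference from E[Z_3] = lam^3
print(sp.simplify(var - var_formula))            # difference from the closed form
```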
