• # Paper 1, Section II, 19H

Let $\left(X_{n}\right)_{n \geqslant 0}$ be a Markov chain with transition matrix $P$. What is a stopping time of $\left(X_{n}\right)_{n \geqslant 0}$ ? What is the strong Markov property?

The exciting game of 'Unopoly' is played by a single player on a board of 4 squares. The player starts with $£ m$ (where $m \in \mathbb{N}$ ). During each turn, the player tosses a fair coin and moves one or two places in a clockwise direction $(1 \rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 1)$ according to whether the coin lands heads or tails respectively. The player collects $£ 2$ each time they pass (or land on) square 1. If the player lands on square 3 however, they immediately lose $£ 1$ and go back to square 2. The game continues indefinitely unless the player is on square 2 with $£ 0$, in which case the player loses the game and the game ends.

(a) By setting up an appropriate Markov chain, show that if the player is at square 2 with $£ m$, where $m \geqslant 1$, the probability that they are ever at square 2 with $£(m-1)$ is $2/3$.

(b) Find the probability of losing the game when the player starts on square 1 with $£ m$, where $m \geqslant 1$.

[Hint: Take the state space of your Markov chain to be $\{1,2,4\} \times\{0,1, \ldots\}$.]
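The value $2/3$ in part (a) can be sanity-checked numerically. Below is a minimal Monte Carlo sketch (an illustration, not part of the question) that simulates the board dynamics from square 2 with $£m$ and estimates the probability of ever reaching square 2 with $£(m-1)$; runs whose fortune drifts far above the start are counted as failures, which introduces only a negligible bias.

```python
import random

def ever_down_one(m, rng, cap=50, max_steps=10_000):
    """Play 'Unopoly' from square 2 with pounds m; return True if the state
    (square 2, pounds m-1) is ever reached.  Runs whose fortune exceeds
    m + cap (or which last more than max_steps turns) are counted as
    failures; the resulting bias is of order (2/3)**cap, i.e. negligible."""
    pos, money = 2, m
    for _ in range(max_steps):
        step = 1 if rng.random() < 0.5 else 2   # heads: 1 square, tails: 2
        for _ in range(step):                   # move clockwise square by square
            pos = pos % 4 + 1
            if pos == 1:                        # pass or land on square 1
                money += 2
        if pos == 3:                            # lose 1 pound, go back to square 2
            money -= 1
            pos = 2
        if pos == 2 and money == m - 1:
            return True
        if pos == 2 and money == 0:             # game is lost and ends
            return False
        if money > m + cap:
            return False
    return False

rng = random.Random(1)
trials = 200_000
est = sum(ever_down_one(5, rng) for _ in range(trials)) / trials
print(est)   # should be close to 2/3
```

Note that a fortune can only decrease one pound at a time via square 3, which always returns the player to square 2, so the first-descent check above is caught before the losing state can occur.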

• # Paper 2, Section II, 18H

Let $P$ be a transition matrix on state space $I$. What does it mean for a distribution $\pi$ to be an invariant distribution? What does it mean for $\pi$ and $P$ to be in detailed balance? Show that if $\pi$ and $P$ are in detailed balance, then $\pi$ is an invariant distribution.

(a) Assuming that an invariant distribution exists, state the relationship between this and

(i) the expected return time to a state $i$;

(ii) the expected time spent in a state $i$ between visits to a state $k$.

(b) Let $\left(X_{n}\right)_{n \geqslant 0}$ be a Markov chain with transition matrix $P=\left(p_{i j}\right)_{i, j \in I}$ where $I=\{0,1,2, \ldots\}$. The transition probabilities are given for $i \geqslant 1$ by

$p_{i j}= \begin{cases}q^{-(i+2)} & \text { if } j=i+1, \\ q^{-i} & \text { if } j=i-1, \\ 1-q^{-(i+2)}-q^{-i} & \text { if } j=i .\end{cases}$

where $q \geqslant 2$. For $p \in(0,1]$ let $p_{01}=p=1-p_{00}$. Compute the following, justifying your answers:

(i) The expected time spent in states $\{2,4,6, \ldots\}$ between visits to state 1;

(ii) The expected time taken to return to state 1, starting from 1;

(iii) The expected time taken to hit state 0 starting from 1.

• # Paper 3, Section I, H

Consider a Markov chain $\left(X_{n}\right)_{n \geqslant 0}$ on a state space $I$.

(a) Define the notion of a communicating class. What does it mean for a communicating class to be closed?

(b) Taking $I=\{1, \ldots, 6\}$, find the communicating classes associated with the transition matrix $P$ given by

$P=\left(\begin{array}{cccccc} 0 & 0 & 0 & 0 & \frac{1}{4} & \frac{3}{4} \\ \frac{1}{4} & 0 & 0 & 0 & \frac{1}{2} & \frac{1}{4} \\ 0 & \frac{1}{2} & 0 & \frac{1}{2} & 0 & 0 \\ 0 & \frac{1}{2} & 0 & 0 & \frac{1}{2} & 0 \\ \frac{1}{4} & \frac{1}{2} & 0 & 0 & 0 & \frac{1}{4} \\ 1 & 0 & 0 & 0 & 0 & 0 \end{array}\right)$

and identify which are closed.

(c) Find the expected time for the Markov chain with transition matrix $P$ above to reach 6 starting from 1.
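For a finite chain such as the one above, both the communicating classes and the mean hitting time can be checked mechanically. The sketch below (an illustration only, not part of the question) finds the classes by mutual reachability and solves the standard one-step equations $k_{i}=1+\sum_{j \neq 6} p_{i j} k_{j}$ exactly in rational arithmetic; the value it prints can be compared with a hand computation.

```python
from fractions import Fraction as F

# transition matrix from the question, stored sparsely
P = {1: {5: F(1, 4), 6: F(3, 4)},
     2: {1: F(1, 4), 5: F(1, 2), 6: F(1, 4)},
     3: {2: F(1, 2), 4: F(1, 2)},
     4: {2: F(1, 2), 5: F(1, 2)},
     5: {1: F(1, 4), 2: F(1, 2), 6: F(1, 4)},
     6: {1: F(1, 1)}}

def reachable(i):
    """All states reachable from i (including i itself)."""
    seen, stack = {i}, [i]
    while stack:
        for j in P[stack.pop()]:
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

R = {i: reachable(i) for i in P}
classes = {frozenset(j for j in P if j in R[i] and i in R[j]) for i in P}
closed = {c for c in classes if all(R[i] <= set(c) for i in c)}

def solve(A, b):
    """Gauss-Jordan elimination over the rationals: solve A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

# only states {1, 2, 5} can be visited before hitting 6 when starting from 1
states = [1, 2, 5]
idx = {s: t for t, s in enumerate(states)}
A = [[(F(1) if s == t else F(0)) - P[s].get(t, F(0)) for t in states]
     for s in states]
k = solve(A, [F(1)] * len(states))
print(k[idx[1]])   # expected time to reach 6 from 1
```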

• # Paper 4, Section I, H

Show that the simple symmetric random walk on $\mathbb{Z}$ is recurrent.

Three particles perform independent simple symmetric random walks on $\mathbb{Z}$. What is the probability that they are all simultaneously at 0 infinitely often? Justify your answer.

[You may assume without proof that there exist constants $A, B>0$ such that $A \sqrt{n}(n / e)^{n} \leqslant n ! \leqslant B \sqrt{n}(n / e)^{n}$ for all positive integers $n$.]


• # Paper 1, Section II, H

Let $\left(X_{n}\right)_{n \geqslant 0}$ be a Markov chain with transition matrix $P$. What is a stopping time of $\left(X_{n}\right)_{n \geqslant 0}$ ? What is the strong Markov property?

A porter is trying to apprehend a student who is walking along a long narrow path at night. Being unaware of the porter, the student's location $Y_{n}$ at time $n \geqslant 0$ evolves as a simple symmetric random walk on $\mathbb{Z}$. The porter's initial location $Z_{0}$ is $2 m$ units to the right of the student, so $Z_{0}-Y_{0}=2 m$ where $m \geqslant 1$. The future locations $Z_{n+1}$ of the porter evolve as follows: The porter moves to the left (so $Z_{n+1}=Z_{n}-1$ ) with probability $q \in\left(\frac{1}{2}, 1\right)$, and to the right with probability $1-q$ whenever $Z_{n}-Y_{n}>2$. When $Z_{n}-Y_{n}=2$, the porter's probability of moving left changes to $r \in(0,1)$, and the probability of moving right is $1-r$.

(a) By setting up an appropriate Markov chain, show that for $m \geqslant 2$, the expected time for the porter to be a distance $2(m-1)$ away from the student is $2 /(2 q-1)$.

(b) Show that the expected time for the porter to catch the student, i.e. for their locations to coincide, is

$\frac{2}{r}+\left(m+\frac{1}{r}-2\right) \frac{2}{2 q-1} .$

[You may use without proof the fact that the time for the porter to catch the student is finite with probability 1 for any $m \geqslant 1$.]
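The claim in part (a) can be sanity-checked by simulation: while $Z_{n}-Y_{n}>2$, the difference moves by $-2, 0, +2$ with probabilities $q/2$, $1/2$, $(1-q)/2$, and $2/(2q-1)$ is the mean time for it to decrease by 2. Below is a seeded Monte Carlo sketch (an illustration only, not part of the question).

```python
import random

def descent_time(q, rng):
    """Steps until Z - Y first drops by 2, started with Z - Y > 2.
    Each step the porter moves -1 w.p. q (else +1) and the student,
    independently, moves -1 or +1 w.p. 1/2 each."""
    d, n = 0, 0                 # d tracks the change in Z - Y so far
    while d > -2:
        porter = -1 if rng.random() < q else 1
        student = -1 if rng.random() < 0.5 else 1
        d += porter - student   # change is -2, 0 or +2
        n += 1
    return n

rng = random.Random(2)
q = 0.75
trials = 100_000
mean = sum(descent_time(q, rng) for _ in range(trials)) / trials
print(mean)   # close to 2/(2q - 1) = 4 for q = 0.75
```

The point estimate also agrees with Wald's identity: the mean step of the difference is $-(2q-1)$, so decreasing by $2$ takes $2/(2q-1)$ steps on average.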

• # Paper 2, Section I, H

Let $\left(X_{n}\right)_{n \geqslant 0}$ be a Markov chain with state space $\{1,2\}$ and transition matrix

$P=\left(\begin{array}{cc} 1-\alpha & \alpha \\ \beta & 1-\beta \end{array}\right)$

where $\alpha, \beta \in(0,1]$. Compute $\mathbb{P}\left(X_{n}=1 \mid X_{0}=1\right)$. Find the value of $\mathbb{P}\left(X_{n}=1 \mid X_{0}=2\right)$.
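A standard diagonalisation (eigenvalues $1$ and $1-\alpha-\beta$) gives $\mathbb{P}\left(X_{n}=1 \mid X_{0}=1\right)=\frac{\beta}{\alpha+\beta}+\frac{\alpha}{\alpha+\beta}(1-\alpha-\beta)^{n}$ and $\mathbb{P}\left(X_{n}=1 \mid X_{0}=2\right)=\frac{\beta}{\alpha+\beta}\left(1-(1-\alpha-\beta)^{n}\right)$. The sketch below (an illustration only) confirms these closed forms against an exact matrix power in rational arithmetic, for sample values of $\alpha, \beta$.

```python
from fractions import Fraction as F

def matpow(M, n):
    """n-th power of a 2x2 matrix with Fraction entries."""
    R = [[F(1), F(0)], [F(0), F(1)]]
    for _ in range(n):
        R = [[sum(R[i][k] * M[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return R

alpha, beta = F(1, 3), F(1, 4)               # sample values in (0, 1]
P = [[1 - alpha, alpha], [beta, 1 - beta]]   # rows/columns = states 1, 2
lam = 1 - alpha - beta

for n in range(8):
    Pn = matpow(P, n)
    p11 = beta / (alpha + beta) + alpha / (alpha + beta) * lam ** n
    p21 = beta / (alpha + beta) * (1 - lam ** n)
    assert Pn[0][0] == p11 and Pn[1][0] == p21
print("closed forms agree for n = 0, ..., 7")
```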


• # Paper 1, Section II, H

Let $P$ be a transition matrix for a Markov chain $\left(X_{n}\right)$ on a state space with $N$ elements with $N<\infty$. Assume that the Markov chain is aperiodic and irreducible and let $\pi$ be its unique invariant distribution. Assume that $X_{0} \sim \pi$.

(a) Let $P^{*}(x, y)=\mathbb{P}\left[X_{0}=y \mid X_{1}=x\right]$. Show that $P^{*}(x, y)=\pi(y) P(y, x) / \pi(x)$.

(b) Let $T=\min \left\{n \geqslant 1: X_{n}=X_{0}\right\}$. Compute $\mathbb{E}[T]$ in terms of an explicit function of $N$.

(c) Suppose that a cop and a robber start from a common state chosen from $\pi$. The robber then takes one step according to $P^{*}$ and stops. The cop then moves according to $P$ independently of the robber until the cop catches the robber (i.e., the cop visits the state occupied by the robber). Compute the expected amount of time for the cop to catch the robber.

• # Paper 2, Section II, H

Fix $n \geqslant 1$ and let $G$ be the graph consisting of a copy of $\{0, \ldots, n\}$ joining vertices $A$ and $B$, a copy of $\{0, \ldots, n\}$ joining vertices $B$ and $C$, and a copy of $\{0, \ldots, n\}$ joining vertices $B$ and $D$. Let $E$ be the vertex adjacent to $B$ on the segment from $B$ to $C$. Shown below is an illustration of $G$ in the case $n=5$. The vertices are solid squares and edges are indicated by straight lines.

Let $\left(X_{k}\right)$ be a simple random walk on $G$. In other words, in each time step, $X$ moves to one of its neighbours with equal probability. Assume that $X_{0}=A$.

(a) Compute the expected amount of time for $X$ to hit $B$.

(b) Compute the expected amount of time for $X$ to hit $E$. [Hint: first show that the expected amount of time $x$ for $X$ to go from $B$ to $E$ satisfies $x=\frac{1}{3}+\frac{2}{3}(L+x)$ where $L$ is the expected return time of $X$ to $B$ when starting from $B$.]

(c) Compute the expected amount of time for $X$ to hit $C$. [Hint: for each $i$, let $v_{i}$ be the vertex which is $i$ places to the right of $B$ on the segment from $B$ to $C$. Derive an equation for the expected amount of time $x_{i}$ for $X$ to go from $v_{i}$ to $v_{i+1}$.]

• # Paper 3, Section I, H

Suppose that $\left(X_{n}\right)$ is a Markov chain with state space $S$.

(a) Give the definition of a communicating class.

(b) Give the definition of the period of a state $a \in S$.

(c) Show that if two states communicate then they have the same period.

• # Paper 4, Section I, H

For a Markov chain $X$ on a state space $S$ with $u, v \in S$, we let $p_{u v}(n)$ for $n \in\{0,1, \ldots\}$ be the probability that $X_{n}=v$ when $X_{0}=u$.

(a) Let $X$ be a Markov chain. Prove that if $X$ is recurrent at a state $v$, then $\sum_{n=0}^{\infty} p_{v v}(n)=\infty$. [You may use without proof that the number of returns of a Markov chain to a state $v$ when starting from $v$ has the geometric distribution.]

(b) Let $X$ and $Y$ be independent simple symmetric random walks on $\mathbb{Z}^{2}$ starting from the origin 0 . Let $Z=\sum_{n=0}^{\infty} \mathbf{1}_{\left\{X_{n}=Y_{n}\right\}}$. Prove that $\mathbb{E}[Z]=\sum_{n=0}^{\infty} p_{00}(2 n)$ and deduce that $\mathbb{E}[Z]=\infty$. [You may use without proof that $p_{x y}(n)=p_{y x}(n)$ for all $x, y \in \mathbb{Z}^{2}$ and $n \in \mathbb{N}$, and that $X$ is recurrent at 0.]


• # Paper 1, Section II, H

A coin-tossing game is played by two players, $A_{1}$ and $A_{2}$. Each player has a coin and the probability that the coin tossed by player $A_{i}$ comes up heads is $p_{i}$, where $0<p_{i}<1$. The players toss their coins according to the following scheme: $A_{1}$ tosses first and then after each head, $A_{2}$ pays $A_{1}$ one pound and $A_{1}$ has the next toss, while after each tail, $A_{1}$ pays $A_{2}$ one pound and $A_{2}$ has the next toss.

Define a Markov chain to describe the state of the game. Find the probability that the game ever returns to a state where neither player has lost money.

• # Paper 2, Section II, H

For a finite irreducible Markov chain, what is the relationship between the invariant probability distribution and the mean recurrence times of states?

A particle moves on the $2^{n}$ vertices of the hypercube, $\{0,1\}^{n}$, in the following way: at each step the particle is equally likely to move to each of the $n$ adjacent vertices, independently of its past motion. (Two vertices are adjacent if the Euclidean distance between them is one.) The initial vertex occupied by the particle is $(0,0, \ldots, 0)$. Calculate the expected number of steps until the particle

(i) first returns to $(0,0, \ldots, 0)$,

(ii) first visits $(0,0, \ldots, 0,1)$,

(iii) first visits $(0,0, \ldots, 0,1,1)$.

• # Paper 3, Section I, H

The mathematics course at the University of Barchester is a three-year one. After the end-of-year examinations there are three possibilities:

(i) failing and leaving (probability $p$ );

(ii) taking that year again (probability $q$ );

(iii) going on to the next year (or graduating, if the current year is the third one) (probability $r$ ).

Thus there are five states for a student ($1^{\text{st}}$ year, $2^{\text{nd}}$ year, $3^{\text{rd}}$ year, left without a degree, graduated).

Write down the $5 \times 5$ transition matrix. Classify the states, assuming $p, q, r \in(0,1)$. Find the probability that a student will eventually graduate.

• # Paper 4, Section I, H

Let $P=\left(p_{i j}\right)_{i, j \in S}$ be the transition matrix for an irreducible Markov chain on the finite state space $S$.

(a) What does it mean to say that a distribution $\pi$ is the invariant distribution for the chain?

(b) What does it mean to say that the chain is in detailed balance with respect to a distribution $\pi$ ? Show that if the chain is in detailed balance with respect to a distribution $\pi$ then $\pi$ is the invariant distribution for the chain.

(c) A symmetric random walk on a connected finite graph is the Markov chain whose state space is the set of vertices of the graph and whose transition probabilities are

$p_{i j}= \begin{cases}1 / D_{i} & \text { if } j \text { is adjacent to } i \\ 0 & \text { otherwise }\end{cases}$

where $D_{i}$ is the number of vertices adjacent to vertex $i$. Show that the random walk is in detailed balance with respect to its invariant distribution.


• # Paper 1, Section II, H

A rich and generous man possesses $n$ pounds. Some poor cousins arrive at his mansion. Being generous, he decides to give them money. On day 1, he chooses uniformly at random an integer between $n-1$ and 1 inclusive and gives it to the first cousin. Then he is left with $x$ pounds. On day 2, he chooses uniformly at random an integer between $x-1$ and 1 inclusive and gives it to the second cousin, and so on. If $x=1$ then he does not give the next cousin any money. His choices of the uniform numbers are independent. Let $X_{i}$ be his fortune at the end of day $i$.

Show that $X$ is a Markov chain and find its transition probabilities.

Let $\tau$ be the first time he has 1 pound left, i.e. $\tau=\min \left\{i \geqslant 1: X_{i}=1\right\}$. Show that

$\mathbb{E}[\tau]=\sum_{i=1}^{n-1} \frac{1}{i}$
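The stated identity can be verified exactly for small $n$: conditioning on the first gift gives $\mathbb{E}_{n}[\tau]=1+\frac{1}{n-1} \sum_{x=1}^{n-1} \mathbb{E}_{x}[\tau]$ with $\mathbb{E}_{1}[\tau]=0$, and induction yields the harmonic sum. A quick rational-arithmetic sketch of this check (an illustration only):

```python
from fractions import Fraction as F

def expected_tau(n):
    """E[tau | X_0 = n], computed exactly by first-step analysis."""
    t = [None, F(0)]                       # t[1] = 0: already at 1 pound
    for m in range(2, n + 1):
        # from m pounds the next fortune is uniform on {1, ..., m-1}
        t.append(1 + sum(t[1:m]) / (m - 1))
    return t[n]

for n in range(2, 21):
    assert expected_tau(n) == sum(F(1, i) for i in range(1, n))
print("E[tau] = 1 + 1/2 + ... + 1/(n-1) verified for n up to 20")
```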

• # Paper 2, Section II, H

Let $Y_{1}, Y_{2}, \ldots$ be i.i.d. random variables with values in $\{1,2, \ldots\}$ and $\mathbb{E}\left[Y_{1}\right]=\mu<\infty$. Moreover, suppose that the greatest common divisor of $\left\{n: \mathbb{P}\left(Y_{1}=n\right)>0\right\}$ is 1 . Consider the following process

$X_{n}=\inf \left\{m \geqslant n: Y_{1}+\ldots+Y_{k}=m, \text { for some } k \geqslant 0\right\}-n .$

(a) Show that $X$ is a Markov chain and find its transition probabilities.

(b) Let $T_{0}=\inf \left\{n \geqslant 1: X_{n}=0\right\}$. Find $\mathbb{E}_{0}\left[T_{0}\right]$.

(c) Find the limit as $n \rightarrow \infty$ of $\mathbb{P}\left(X_{n}=0\right)$. State carefully any theorems from the course that you are using.

• # Paper 3, Section I, H

(a) What does it mean to say that a Markov chain is reversible?

(b) Let $G$ be a finite connected graph on $n$ vertices. What does it mean to say that $X$ is a simple random walk on $G$ ?

Find the unique invariant distribution $\pi$ of $X$.

Show that $X$ is reversible when $X_{0} \sim \pi$.

[You may use, without proof, results about detailed balance equations, but you should state them clearly.]

• # Paper 4, Section I, 9H

Prove that the simple symmetric random walk on $\mathbb{Z}^{3}$ is transient.

[Any combinatorial inequality can be used without proof.]


• # Paper 1, Section II, H

Let $\left(X_{n}\right)_{n \geqslant 0}$ be a simple symmetric random walk on the integers, starting at $X_{0}=0$.

(a) What does it mean to say that a Markov chain is irreducible? What does it mean to say that an irreducible Markov chain is recurrent? Show that $\left(X_{n}\right)_{n \geqslant 0}$ is irreducible and recurrent.

[Hint: You may find it helpful to use the limit

$\lim _{k \rightarrow \infty} \sqrt{k} 2^{-2 k}\left(\begin{array}{c} 2 k \\ k \end{array}\right)=\frac{1}{\sqrt{\pi}}$

You may also use without proof standard necessary and sufficient conditions for recurrence.]

(b) What does it mean to say that an irreducible Markov chain is positive recurrent? Determine, with proof, whether $\left(X_{n}\right)_{n \geqslant 0}$ is positive recurrent.

(c) Let

$T=\inf \left\{n \geqslant 1: X_{n}=0\right\}$

be the first time the chain returns to the origin. Compute $\mathbb{E}\left[s^{T}\right]$ for a fixed number $0<s<1$.

• # Paper 2, Section II, H

(a) Prove that every open communicating class of a Markov chain is transient. Prove that every finite transient communicating class is open. Give an example of a Markov chain with an infinite transient closed communicating class.

(b) Consider a Markov chain $\left(X_{n}\right)_{n \geqslant 0}$ with state space $\{a, b, c, d\}$ and transition probabilities given by the matrix

$P=\left(\begin{array}{cccc} 1 / 3 & 0 & 1 / 3 & 1 / 3 \\ 0 & 1 / 4 & 0 & 3 / 4 \\ 1 / 2 & 1 / 2 & 0 & 0 \\ 0 & 2 / 3 & 0 & 1 / 3 \end{array}\right)$

(i) Compute $\mathbb{P}\left(X_{n}=b \mid X_{0}=d\right)$ for a fixed $n \geqslant 0$.

(ii) Compute $\mathbb{P}\left(X_{n}=c\right.$ for some $\left.n \geqslant 1 \mid X_{0}=a\right)$.

(iii) Show that $P^{n}$ converges as $n \rightarrow \infty$, and determine the limit.

[Results from lectures can be used without proof if stated carefully.]

• # Paper 3, Section I, H

Let $\left(X_{n}\right)_{n \geqslant 0}$ be a Markov chain such that $X_{0}=i$. Prove that

$\sum_{n=0}^{\infty} \mathbb{P}_{i}\left(X_{n}=i\right)=\frac{1}{\mathbb{P}_{i}\left(X_{n} \neq i \text { for all } n \geqslant 1\right)}$

where $1 / 0=+\infty$. [You may use the strong Markov property without proof.]

• # Paper 4, Section I, H

Consider two boxes, labelled A and B. Initially, there are no balls in box A and $k$ balls in box B. Each minute, one of the $k$ balls is chosen uniformly at random and moved to the opposite box. Let $X_{n}$ denote the number of balls in box A at time $n$, so that $X_{0}=0$.

(a) Find the transition probabilities of the Markov chain $\left(X_{n}\right)_{n \geqslant 0}$ and show that it is reversible in equilibrium.

(b) Find $\mathbb{E}(T)$, where $T=\inf \left\{n \geqslant 1: X_{n}=0\right\}$ is the next time that all $k$ balls are again in box $B$.
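This is the Ehrenfest urn, whose invariant distribution is binomial, $\pi(i)=\binom{k}{i} 2^{-k}$, in detailed balance with the chain; the mean return time to 0 is then $1/\pi(0)=2^{k}$. The sketch below (an illustration only, with $k=4$) verifies detailed balance and recomputes $\mathbb{E}(T)$ independently by first-step analysis in exact rational arithmetic.

```python
from fractions import Fraction as F
from math import comb

k = 4                                   # number of balls (small, for an exact check)

def p(i, j):
    """Transitions: a uniformly chosen ball switches boxes."""
    if j == i - 1:
        return F(i, k)                  # one of the i balls in A moves to B
    if j == i + 1:
        return F(k - i, k)              # one of the k - i balls in B moves to A
    return F(0)

pi = [F(comb(k, i), 2 ** k) for i in range(k + 1)]
for i in range(k):                      # detailed balance across every edge
    assert pi[i] * p(i, i + 1) == pi[i + 1] * p(i + 1, i)

# h_i = expected time to hit 0 from i, for i = 1..k: h_i = 1 + sum_j p(i, j) h_j
states = list(range(1, k + 1))
n = len(states)
M = [[(F(1) if a == b else F(0)) - p(sa, sb) for b, sb in enumerate(states)]
     for a, sa in enumerate(states)]
for r in range(n):
    M[r].append(F(1))                   # right-hand side of the linear system
for c in range(n):                      # Gauss-Jordan over the rationals
    piv = next(r for r in range(c, n) if M[r][c] != 0)
    M[c], M[piv] = M[piv], M[c]
    M[c] = [v / M[c][c] for v in M[c]]
    for r in range(n):
        if r != c and M[r][c] != 0:
            M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
ET = 1 + M[0][n]                        # E(T) = 1 + h_1, since p(0, 1) = 1
assert ET == 2 ** k == 1 / pi[0]
print(ET)                               # 16 for k = 4
```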


• # Paper 1, Section II, H

Consider a particle moving between the vertices of the graph below, taking steps along the edges. Let $X_{n}$ be the position of the particle at time $n$. At time $n+1$ the particle moves to one of the vertices adjoining $X_{n}$, with each of the adjoining vertices being equally likely, independently of previous moves. Explain briefly why $\left(X_{n} ; n \geqslant 0\right)$ is a Markov chain on the vertices. Is this chain irreducible? Find an invariant distribution for this chain.

Suppose that the particle starts at $B$. By adapting the transition matrix, or otherwise, find the probability that the particle hits vertex $A$ before vertex $F$.

Find the expected first passage time from $B$ to $F$ given no intermediate visit to $A$.

[Results from the course may be used without proof provided that they are clearly stated.]

• # Paper 2, Section II, H

(a) What does it mean for a transition matrix $P$ and a distribution $\lambda$ to be in detailed balance? Show that if $P$ and $\lambda$ are in detailed balance then $\lambda=\lambda P$.

(b) A mathematician owns $r$ bicycles, which she sometimes uses for her journey from the station to College in the morning and for the return journey in the evening. If it is fine weather when she starts a journey, and if there is a bicycle available at the current location, then she cycles; otherwise she takes the bus. Assume that with probability $p$, where $0<p<1$, it is fine when she starts a journey, independently of all other journeys. Let $X_{n}$ denote the number of bicycles at the current location, just before the mathematician starts the $n$th journey.

(i) Show that $\left(X_{n} ; n \geqslant 0\right)$ is a Markov chain and write down its transition matrix.

(ii) Find the invariant distribution of the Markov chain.

(iii) Show that the Markov chain satisfies the necessary conditions for the convergence theorem for Markov chains and find the limiting probability that the mathematician's $n$th journey is by bicycle.

[Results from the course may be used without proof provided that they are clearly stated.]

• # Paper 3, Section I, H

Define what is meant by a communicating class and a closed class in a Markov chain.

A Markov chain $\left(X_{n}: n \geqslant 0\right)$ with state space $\{1,2,3,4\}$ has transition matrix

$P=\left(\begin{array}{cccc} \frac{1}{2} & 0 & \frac{1}{2} & 0 \\ 0 & \frac{1}{2} & 0 & \frac{1}{2} \\ \frac{1}{2} & 0 & \frac{1}{2} & 0 \\ \frac{1}{2} & 0 & 0 & \frac{1}{2} \end{array}\right)$

Write down the communicating classes for this Markov chain and state whether or not each class is closed.

If $X_{0}=2$, let $N$ be the smallest $n$ such that $X_{n} \neq 2$. Find $\mathbb{P}(N=n)$ for $n=1,2, \ldots$ and $\mathbb{E}(N)$. Describe the evolution of the chain if $X_{0}=2$.

• # Paper 4, Section I, H

Let $X_{0}, X_{1}, X_{2}, \ldots$ be independent identically distributed random variables with $\mathbb{P}\left(X_{i}=1\right)=1-\mathbb{P}\left(X_{i}=0\right)=p$, where $0<p<1$. Let $Z_{n}=X_{n-1}+c X_{n}$, $n=1,2, \ldots$, where $c$ is a constant. For each of the following cases, determine whether or not $\left(Z_{n}: n \geqslant 1\right)$ is a Markov chain: (a) $c=0$; (b) $c=1$; (c) $c=2$.

In each case, if $\left(Z_{n}: n \geqslant 1\right)$ is a Markov chain, explain why, and give its state space and transition matrix; if it is not a Markov chain, give an example to demonstrate that it is not.


• # Paper 1, Section II, 20H

Consider a homogeneous Markov chain $\left(X_{n}: n \geqslant 0\right)$ with state space $S$ and transition matrix $P=\left(p_{i, j}: i, j \in S\right)$. For a state $i$, define the terms aperiodic, positive recurrent and ergodic.

Let $S=\{0,1,2, \ldots\}$ and suppose that for $i \geqslant 1$ we have $p_{i, i-1}=1$ and

$p_{0,0}=0, p_{0, j}=p q^{j-1}, j=1,2, \ldots,$

where $p=1-q \in(0,1)$. Show that this Markov chain is irreducible.

Let $T_{0}=\inf \left\{n \geqslant 1: X_{n}=0\right\}$ be the first passage time to 0 . Find $\mathbb{P}\left(T_{0}=n \mid X_{0}=0\right)$ and show that state 0 is ergodic.

Find the invariant distribution $\pi$ for this Markov chain. Write down:

(i) the mean recurrence time for state $i, i \geqslant 1$;

(ii) $\lim _{n \rightarrow \infty} \mathbb{P}\left(X_{n} \neq 0 \mid X_{0}=0\right)$.

[Results from the course may be quoted without proof, provided they are clearly stated.]

• # Paper 2, Section II, H

Let $\left(X_{n}: n \geqslant 0\right)$ be a homogeneous Markov chain with state space $S$ and transition matrix $P=\left(p_{i, j}: i, j \in S\right)$. For $A \subseteq S$, let

$H^{A}=\inf \left\{n \geqslant 0: X_{n} \in A\right\} \text { and } h_{i}^{A}=\mathbb{P}\left(H^{A}<\infty \mid X_{0}=i\right), i \in S$

Prove that $h^{A}=\left(h_{i}^{A}: i \in S\right)$ is the minimal non-negative solution to the equations

$h_{i}^{A}= \begin{cases}1 & \text { for } i \in A \\ \sum_{j \in S} p_{i, j} h_{j}^{A} & \text { otherwise. }\end{cases}$

Three people $A, B$ and $C$ play a series of two-player games. In the first game, two people play and the third person sits out. Any subsequent game is played between the winner of the previous game and the person sitting out the previous game. The overall winner of the series is the first person to win two consecutive games. The players are evenly matched so that in any game each of the two players has probability $\frac{1}{2}$ of winning the game, independently of all other games. For $n=1,2, \ldots$, let $X_{n}$ be the ordered pair consisting of the winners of games $n$ and $n+1$. Thus the state space is $\{A A, A B, A C, B A, B B, B C, C A, C B, C C\}$, and, for example, $X_{1}=A C$ if $A$ wins the first game and $C$ wins the second.

The first game is between $A$ and $B$. Treating $A A, B B$ and $C C$ as absorbing states, or otherwise, find the probability of winning the series for each of the three players.

• # Paper 3, Section I, H

Let $\left(X_{n}: n \geqslant 0\right)$ be a homogeneous Markov chain with state space $S$. For $i, j$ in $S$ let $p_{i, j}(n)$ denote the $n$-step transition probability $\mathbb{P}\left(X_{n}=j \mid X_{0}=i\right)$.

(i) Express the $(m+n)$-step transition probability $p_{i, j}(m+n)$ in terms of the $n$-step and $m$-step transition probabilities.

(ii) Write $i \rightarrow j$ if there exists $n \geqslant 0$ such that $p_{i, j}(n)>0$, and $i \leftrightarrow j$ if $i \rightarrow j$ and $j \rightarrow i$. Prove that if $i \leftrightarrow j$ and $i \neq j$ then either both $i$ and $j$ are recurrent or both $i$ and $j$ are transient. [You may assume that a state $i$ is recurrent if and only if $\sum_{n=0}^{\infty} p_{i, i}(n)=\infty$, and otherwise $i$ is transient.]

(iii) A Markov chain has state space $\{0,1,2,3\}$ and transition matrix

$\left(\begin{array}{cccc} \frac{1}{2} & \frac{1}{3} & 0 & \frac{1}{6} \\ 0 & \frac{3}{4} & 0 & \frac{1}{4} \\ \frac{1}{2} & \frac{1}{2} & 0 & 0 \\ \frac{1}{2} & 0 & 0 & \frac{1}{2} \end{array}\right)$

For each state $i$, determine whether $i$ is recurrent or transient. [Results from the course may be quoted without proof, provided they are clearly stated.]

• # Paper 4, Section I, H

Let $\left(X_{n}: n \geqslant 0\right)$ be a homogeneous Markov chain with state space $S$ and transition matrix $P=\left(p_{i, j}: i, j \in S\right)$.

(a) Let $W_{n}=X_{2 n}, n=0,1,2, \ldots$ Show that $\left(W_{n}: n \geqslant 0\right)$ is a Markov chain and give its transition matrix. If $\lambda_{i}=\mathbb{P}\left(X_{0}=i\right), i \in S$, find $\mathbb{P}\left(W_{1}=0\right)$ in terms of the $\lambda_{i}$ and the $p_{i, j}$.

[Results from the course may be quoted without proof, provided they are clearly stated.]

(b) Suppose that $S=\{-1,0,1\}, p_{0,1}=p_{-1,-1}=0$ and $p_{-1,0} \neq p_{1,0}$. Let $Y_{n}=\left|X_{n}\right|$, $n=0,1,2, \ldots$ In terms of the $p_{i, j}$, find

(i) $\mathbb{P}\left(Y_{n+1}=0 \mid Y_{n}=1, Y_{n-1}=0\right)$ and

(ii) $\mathbb{P}\left(Y_{n+1}=0 \mid Y_{n}=1, Y_{n-1}=1, Y_{n-2}=0\right)$.

What can you conclude about whether or not $\left(Y_{n}: n \geqslant 0\right)$ is a Markov chain?


• # Paper 1, Section II, 20H

A Markov chain has state space $\{a, b, c\}$ and transition matrix

$P=\left(\begin{array}{ccc} 0 & 3 / 5 & 2 / 5 \\ 3 / 4 & 0 & 1 / 4 \\ 2 / 3 & 1 / 3 & 0 \end{array}\right)$

where the rows $1,2,3$ correspond to $a, b, c$, respectively. Show that this Markov chain is equivalent to a random walk on some graph with 6 edges.

Let $k(i, j)$ denote the mean first passage time from $i$ to $j$.

(i) Find $k(a, a)$ and $k(a, b)$.

(ii) Given $X_{0}=a$, find the expected number of steps until the walk first completes a step from $b$ to $c$.

(iii) Suppose the distribution of $X_{0}$ is $\left(\pi_{1}, \pi_{2}, \pi_{3}\right)=(5,4,3) / 12$. Let $\tau(a, b)$ be the least $m$ such that $\{a, b\}$ appears as a subsequence of $\left\{X_{0}, X_{1}, \ldots, X_{m}\right\}$. By comparing the distributions of $\left\{X_{0}, X_{1}, \ldots, X_{m}\right\}$ and $\left\{X_{m}, \ldots, X_{1}, X_{0}\right\}$ show that $E \tau(a, b)=E \tau(b, a)$ and that

$k(b, a)-k(a, b)=\sum_{i \in\{a, b, c\}} \pi_{i}[k(i, a)-k(i, b)]$
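For part (i), the useful observation is that $P$ is the walk on the multigraph on $\{a, b, c\}$ with edge multiplicities $w_{ab}=3$, $w_{ac}=2$, $w_{bc}=1$ (six edges in all), so $\pi$ is proportional to the degrees $(5,4,3)$ and $k(a, a)=1/\pi_{a}=12/5$. The sketch below (an illustration only; the edge multiplicities are reconstructed from $P$, not given in the question) checks this in rational arithmetic.

```python
from fractions import Fraction as F

# candidate edge multiplicities for the multigraph (an assumption to be checked)
w = {('a', 'b'): 3, ('a', 'c'): 2, ('b', 'c'): 1}
V = ['a', 'b', 'c']
deg = {v: sum(m for e, m in w.items() if v in e) for v in V}
assert sum(deg.values()) == 2 * 6                      # six edges in total

def weight(u, v):
    return w.get((u, v), w.get((v, u), 0))

P = {u: {v: F(weight(u, v), deg[u]) for v in V} for u in V}
# this reproduces the transition matrix of the question
assert P['a']['b'] == F(3, 5) and P['b']['c'] == F(1, 4) and P['c']['a'] == F(2, 3)

pi = {v: F(deg[v], sum(deg.values())) for v in V}      # degree-biased distribution
for u in V:                                            # detailed balance, hence invariance
    for v in V:
        assert pi[u] * P[u][v] == pi[v] * P[v][u]

print(1 / pi['a'])    # k(a, a), the mean return time to a
```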

• # Paper 2, Section II, H

(i) Suppose $\left(X_{n}\right)_{n \geqslant 0}$ is an irreducible Markov chain and $f_{i j}=P\left(X_{n}=j\right.$ for some $\left.n \geqslant 1 \mid X_{0}=i\right)$. Prove that $f_{i i} \geqslant f_{i j} f_{j i}$ and that

$\sum_{n=0}^{\infty} P_{i}\left(X_{n}=i\right)=\sum_{n=1}^{\infty} f_{i i}^{n-1}$

(ii) Let $\left(X_{n}\right)_{n \geqslant 0}$ be a symmetric random walk on the $\mathbb{Z}^{2}$ lattice. Prove that $\left(X_{n}\right)_{n \geqslant 0}$ is recurrent. You may assume, for $n \geqslant 1$,

$\frac{1}{2} \leqslant 2^{-2 n} \sqrt{n}\left(\begin{array}{c} 2 n \\ n \end{array}\right)<1$

(iii) A princess and monster perform independent random walks on the $\mathbb{Z}^{2}$ lattice. The trajectory of the princess is the symmetric random walk $\left(X_{n}\right)_{n \geqslant 0}$. The monster's trajectory, denoted $\left(Z_{n}\right)_{n \geqslant 0}$, is a sleepy version of an independent symmetric random walk $\left(Y_{n}\right)_{n \geqslant 0}$. Specifically, given an infinite sequence of integers $0=n_{0}<n_{1}<n_{2}<\cdots$, the monster sleeps between these times, so $Z_{n_{i}+1}=\cdots=Z_{n_{i+1}}=Y_{i+1}$. Initially, $X_{0}=(100,0)$ and $Z_{0}=Y_{0}=(0,100)$. The princess is captured if and only if at some future time she and the monster are simultaneously at $(0,0)$.

Compare the capture probabilities for an active monster, who takes $n_{i+1}=n_{i}+1$ for all $i$, and a sleepy monster, who takes $n_{i}$ spaced sufficiently widely so that

$P\left(X_{k}=(0,0) \text { for some } k \in\left\{n_{i}+1, \ldots, n_{i+1}\right\}\right)>1 / 2$

• # Paper 3, Section I, H

Prove that if a distribution $\pi$ is in detailed balance with a transition matrix $P$ then it is an invariant distribution for $P$.

Consider the following model with 2 urns. At each time $t=0,1, \ldots$, one of the following happens:

• with probability $\beta$ a ball is chosen at random and moved to the other urn (but nothing happens if both urns are empty);

• with probability $\gamma$ a ball is chosen at random and removed (but nothing happens if both urns are empty);

• with probability $\alpha$ a new ball is added to a randomly chosen urn,

where $\alpha+\beta+\gamma=1$ and $\alpha<\gamma$. State $(i, j)$ denotes that urns 1,2 contain $i$ and $j$ balls respectively. Prove that there is an invariant measure

$\lambda_{i, j}=\frac{(i+j) !}{i ! j !}\left(\frac{\alpha}{2 \gamma}\right)^{i+j}$

Find the proportion of time for which there are $n$ balls in the system.

• # Paper 4, Section I, H

Suppose $P$ is the transition matrix of an irreducible recurrent Markov chain with state space $I$. Show that if $x$ is an invariant measure and $x_{k}>0$ for some $k \in I$, then $x_{j}>0$ for all $j \in I$.

Let

$\gamma_{j}^{k}=p_{k j}+\sum_{t=1}^{\infty} \sum_{i_{1} \neq k, \ldots, i_{t} \neq k} p_{k i_{t}} p_{i_{t} i_{t-1}} \cdots p_{i_{1} j}$

Give a meaning to $\gamma_{j}^{k}$ and explain why $\gamma_{k}^{k}=1$.

Suppose $x$ is an invariant measure with $x_{k}=1$. Prove that $x_{j} \geqslant \gamma_{j}^{k}$ for all $j$.


• # Paper 1, Section II, 20H

A Markov chain $\left(X_{n}\right)_{n \geqslant 0}$ has as its state space the integers, with

$p_{i, i+1}=p, \quad p_{i, i-1}=q=1-p$

and $p_{i j}=0$ otherwise. Assume $p>q$.

Let $T_{j}=\inf \left\{n \geqslant 1: X_{n}=j\right\}$ if this is finite, and $T_{j}=\infty$ otherwise. Let $V_{0}$ be the total number of hits on 0 , and let $V_{0}(n)$ be the total number of hits on 0 within times $0, \ldots, n-1$. Let

\begin{aligned} h_{i} &=P\left(V_{0}>0 \mid X_{0}=i\right) \\ r_{i}(n) &=E\left[V_{0}(n) \mid X_{0}=i\right] \\ r_{i} &=E\left[V_{0} \mid X_{0}=i\right] \end{aligned}

(i) Quoting an appropriate theorem, find, for every $i$, the value of $h_{i}$.

(ii) Show that if $\left(x_{i}, i \in \mathbb{Z}\right)$ is any non-negative solution to the system of equations

\begin{aligned} x_{0} &=1+q x_{1}+p x_{-1}, \\ x_{i} &=q x_{i-1}+p x_{i+1}, \quad \text { for all } i \neq 0 \end{aligned}

then $x_{i} \geqslant r_{i}(n)$ for all $i$ and $n$.

(iii) Show that $P\left(V_{0}\left(T_{1}\right) \geqslant k \mid X_{0}=1\right)=q^{k}$ and $E\left[V_{0}\left(T_{1}\right) \mid X_{0}=1\right]=q / p$.

(iv) Explain why $r_{i+1}=(q / p) r_{i}$ for $i>0$.

(v) Find $r_{i}$ for all $i$.

• # Paper 2, Section II, H

Let $\left(X_{n}\right)_{n \geqslant 0}$ be the symmetric random walk on the vertices of a connected graph. At each step this walk jumps from the current vertex to a neighbouring vertex, choosing uniformly amongst them. Let $T_{i}=\inf \left\{n \geqslant 1: X_{n}=i\right\}$. For each $i \neq j$ let $q_{i j}=P\left(T_{j}<T_{i} \mid X_{0}=i\right)$ and $m_{i j}=E\left(T_{j} \mid X_{0}=i\right)$. Stating any theorems that you use:

(i) Prove that the invariant distribution $\pi$ satisfies detailed balance.

(ii) Use reversibility to explain why $\pi_{i} q_{i j}=\pi_{j} q_{j i}$ for all $i, j$.

Consider a symmetric random walk on the graph shown below.

(iii) Find $m_{33}$.

(iv) The removal of any edge $(i, j)$ leaves two disjoint components, one which includes $i$ and one which includes $j$. Prove that $m_{i j}=1+2 e_{i j}(i)$, where $e_{i j}(i)$ is the number of edges in the component that contains $i$.

(v) Show that $m_{i j}+m_{j i} \in\{18,36,54,72\}$ for all $i \neq j$.

• # Paper 3, Section I, H

A runner owns $k$ pairs of running shoes and runs twice a day. In the morning she leaves her house by the front door, and in the evening she leaves by the back door. On starting each run she looks for shoes by the door through which she exits, and runs barefoot if none are there. At the end of each run she is equally likely to return through the front or back doors. She removes her shoes (if any) and places them by the door. In the morning of day 1 all shoes are by the back door so she must run barefoot.

Let $p_{00}^{(n)}$ be the probability that she runs barefoot on the morning of day $n+1$. What conditions are satisfied in this problem which ensure $\lim _{n \rightarrow \infty} p_{00}^{(n)}$ exists? Show that its value is $1 /(2 k+1)$.

Find the expected number of days that will pass until the first morning that she finds all $k$ pairs of shoes at her front door.

comment
• # Paper 4, Section I, H

Let $\left(X_{n}\right)_{n \geqslant 0}$ be an irreducible Markov chain with $p_{i j}^{(n)}=P\left(X_{n}=j \mid X_{0}=i\right)$. Define the meaning of the statements:

(i) state $i$ is transient,

(ii) state $i$ is aperiodic.

Give a criterion for transience that can be expressed in terms of the probabilities $\left(p_{i i}^{(n)}, n=0,1, \ldots\right)$.

Prove that if a state $i$ is transient then all states are transient.

Prove that if a state $i$ is aperiodic then all states are aperiodic.

Suppose that $p_{i i}^{(n)}=0$ unless $n$ is divisible by 3. Given any other state $j$, prove that $p_{j j}^{(n)}=0$ unless $n$ is divisible by 3.
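As an illustration (not part of the question), the divisibility statement can be checked on a small chain in which every cycle has length 3; the states and transition probabilities below are my own choice.

```python
# A chain whose every closed walk has length divisible by 3:
# 0 -> 1, then 1 -> 2 or 1 -> 3 with probability 1/2 each, and 2 -> 0, 3 -> 0.
P = [[0, 1, 0, 0],
     [0, 0, 0.5, 0.5],
     [1, 0, 0, 0],
     [1, 0, 0, 0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# p_ii^(n) > 0 exactly when 3 divides n, for every state i.
Pn = P
for n in range(1, 13):
    for i in range(4):
        assert (Pn[i][i] > 0) == (n % 3 == 0), (n, i)
    Pn = mat_mul(Pn, P)
```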

comment

• # Paper 1, Section II, H

Let $P=\left(p_{i j}\right)_{i, j \in S}$ be the transition matrix for an irreducible Markov chain on the finite state space $S$.

(i) What does it mean to say $\pi$ is the invariant distribution for the chain?

(ii) What does it mean to say the chain is in detailed balance with respect to $\pi$ ?

(iii) A symmetric random walk on a connected finite graph is the Markov chain whose state space is the set of vertices of the graph and whose transition probabilities are

$p_{i j}= \begin{cases}1 / D_{i} & \text { if } j \text { is adjacent to } i \\ 0 & \text { otherwise }\end{cases}$

where $D_{i}$ is the number of vertices adjacent to vertex $i$. Show that the random walk is in detailed balance with respect to its invariant distribution.

(iv) Let $\pi$ be the invariant distribution for the transition matrix $P$, and define an inner product for vectors $x, y \in \mathbb{R}^{S}$ by the formula

$\langle x, y\rangle=\sum_{i \in S} x_{i} \pi_{i} y_{i}$

Show that the equation

$\langle x, P y\rangle=\langle P x, y\rangle$

holds for all vectors $x, y \in \mathbb{R}^{S}$ if and only if the chain is in detailed balance with respect to $\pi$. [Here $z \in \mathbb{R}^{S}$ means $z=\left(z_{i}\right)_{i \in S}$.]
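A quick numerical illustration of (iii) and (iv), using an example graph of my own (not taken from the question): for the symmetric random walk on a connected graph the invariant distribution is $\pi_{i}=D_{i} / \sum_{j} D_{j}$, detailed balance holds edge by edge, and the self-adjointness identity can be spot-checked on sample vectors.

```python
# Example graph (assumed for illustration): edges 0-1, 1-2, 2-0, 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
n = len(adj)
deg = {v: len(adj[v]) for v in adj}
total = sum(deg.values())                      # = 2 * number of edges
pi = [deg[v] / total for v in range(n)]        # invariant distribution
P = [[1 / deg[i] if j in adj[i] else 0.0 for j in range(n)] for i in range(n)]

# Detailed balance: pi_i p_ij = pi_j p_ji (each side is 1/total on an edge).
for i in range(n):
    for j in range(n):
        assert abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12

# <x, Py> = <Px, y> with <x, y> = sum_i x_i pi_i y_i, on sample vectors.
def inner(x, y):
    return sum(x[i] * pi[i] * y[i] for i in range(n))

def apply_P(x):
    return [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]

x, y = [1.0, -2.0, 0.5, 3.0], [0.0, 1.0, 4.0, -1.0]
assert abs(inner(x, apply_P(y)) - inner(apply_P(x), y)) < 1e-12
```

Checking the identity on a basis of vectors would establish it in full, by bilinearity; two sample vectors suffice as an illustration.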

comment
• # Paper 2, Section II, H

(i) Let $\left(X_{n}\right)_{n \geqslant 0}$ be a Markov chain on the finite state space $S$ with transition matrix $P$. Fix a subset $A \subseteq S$, and let

$H=\inf \left\{n \geqslant 0: X_{n} \in A\right\} .$

Fix a function $g$ on $S$ such that $0<g(i) \leqslant 1$ for all $i \in S$, and let

$V_{i}=\mathbb{E}\left[\prod_{n=0}^{H-1} g\left(X_{n}\right) \mid X_{0}=i\right]$

where $\prod_{n=0}^{-1} a_{n}=1$ by convention. Show that

$V_{i}= \begin{cases}1 & \text { if } i \in A \\ g(i) \sum_{j \in S} P_{i j} V_{j} & \text { otherwise. }\end{cases}$

(ii) A flea lives on a polyhedron with $N$ vertices, labelled $1, \ldots, N$. It hops from vertex to vertex in the following manner: if one day it is on vertex $i>1$, the next day it hops to one of the vertices labelled $1, \ldots, i-1$ with equal probability, and it dies upon reaching vertex 1. Let $X_{n}$ be the position of the flea on day $n$. What are the transition probabilities for the Markov chain $\left(X_{n}\right)_{n \geqslant 0}$ ?

(iii) Let $H$ be the number of days the flea is alive, and let

$V_{i}=\mathbb{E}\left(s^{H} \mid X_{0}=i\right)$

where $s$ is a real number such that $0<s \leqslant 1$. Show that $V_{1}=1$ and

$\frac{i}{s} V_{i+1}=V_{i}+\frac{i-1}{s} V_{i}$

for $i \geqslant 1$. Conclude that

$\mathbb{E}\left(s^{H} \mid X_{0}=N\right)=\prod_{i=1}^{N-1}\left(1+\frac{s-1}{i}\right)$

[Hint. Use part (i) with $A=\{1\}$ and a well-chosen function $g$. ]

(iv) Show that

$\mathbb{E}\left(H \mid X_{0}=N\right)=\sum_{i=1}^{N-1} \frac{1}{i}$
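The two formulas can be cross-checked with exact rational arithmetic; the recursion used below, $V_{j}=\frac{s}{j-1} \sum_{l<j} V_{l}$ for $j>1$, is part (i) applied with $A=\{1\}$ and $g \equiv s$, and is equivalent to the displayed recurrence. A sketch:

```python
from fractions import Fraction as F

def V(N, s):
    """E[s^H | X0 = j] for j = 1..N via V_1 = 1 and
    V_j = (s / (j-1)) * (V_1 + ... + V_{j-1}) for j > 1."""
    vals = [None, F(1)]
    for j in range(2, N + 1):
        vals.append(s * sum(vals[1:j]) / (j - 1))
    return vals[N]

def V_product(N, s):
    """The closed form: product over i = 1..N-1 of (1 + (s-1)/i)."""
    out = F(1)
    for i in range(1, N):
        out *= 1 + (s - 1) / i
    return out

def mean_H(N):
    """E[H | X0 = N] via m_1 = 0, m_j = 1 + (m_1 + ... + m_{j-1}) / (j-1)."""
    m = [None, F(0)]
    for j in range(2, N + 1):
        m.append(1 + sum(m[1:j]) / (j - 1))
    return m[N]

for N in range(1, 12):
    s = F(1, 3)
    assert V(N, s) == V_product(N, s)                       # part (iii)
    assert mean_H(N) == sum(F(1, i) for i in range(1, N))   # part (iv)
```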

comment
• # Paper 3, Section I, H

Let $\left(X_{n}\right)_{n \geqslant 0}$ be a Markov chain with state space $S$.

(i) What does it mean to say that $\left(X_{n}\right)_{n \geqslant 0}$ has the strong Markov property? Your answer should include the definition of the term stopping time.

(ii) Show that

$\mathbb{P}\left(X_{n}=i \text { at least } k \text { times } \mid X_{0}=i\right)=\left[\mathbb{P}\left(X_{n}=i \text { at least once } \mid X_{0}=i\right)\right]^{k}$

for a state $i \in S$. You may use without proof the fact that $\left(X_{n}\right)_{n \geqslant 0}$ has the strong Markov property.
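The geometric structure in (ii) can be seen by simulation on a toy transient chain (the chain and its parameters are my own choice): from state 0 the walk moves to 1; from 1 it returns to 0 with probability $1/2$ and is absorbed at 2 otherwise, so the return probability to 0 is $1/2$.

```python
import random

random.seed(2024)

def returns_to_0():
    """Simulate until absorption at state 2; count the returns to state 0."""
    x, count = 0, 0
    while True:
        # From 0 go to 1; from 1 go back to 0 w.p. 1/2, else absorb at 2.
        x = 1 if x == 0 else (0 if random.random() < 0.5 else 2)
        if x == 2:
            return count
        if x == 0:
            count += 1

samples = [returns_to_0() for _ in range(200000)]
f = 0.5                                    # P(return to 0 at least once)
for k in (1, 2, 3):
    freq = sum(c >= k for c in samples) / len(samples)
    assert abs(freq - f ** k) < 0.01       # matches [P(at least once)]^k
```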

comment
• # Paper 4, Section I, H

Let $\left(X_{n}\right)_{n \geqslant 0}$ be a Markov chain on a state space $S$, and let $p_{i j}(n)=\mathbb{P}\left(X_{n}=j \mid X_{0}=i\right)$.

(i) What does the term communicating class mean in terms of this chain?

(ii) Show that $p_{i i}(m+n) \geqslant p_{i j}(m) p_{j i}(n)$.

(iii) The period $d_{i}$ of a state $i$ is defined to be

$d_{i}=\operatorname{gcd}\left\{n \geqslant 1: p_{i i}(n)>0\right\}$

Show that if $i$ and $j$ are in the same communicating class and $p_{j j}(r)>0$, then $d_{i}$ divides $r$.
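Part (ii) is the Chapman-Kolmogorov inequality: one way to be back at $i$ after $m+n$ steps is to be at $j$ after $m$ steps and then return in the next $n$. A numerical spot-check, on an arbitrary irreducible chain of my own choosing:

```python
# An arbitrary irreducible 3-state chain chosen for illustration.
P = [[0.5, 0.25, 0.25],
     [0.25, 0.25, 0.5],
     [0.0, 0.5, 0.5]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# powers[n] = P^n for n = 1..8.
powers = {1: P}
for n in range(2, 9):
    powers[n] = mat_mul(powers[n - 1], P)

# Verify p_ii(m + n) >= p_ij(m) * p_ji(n) for all states and small m, n.
for m in range(1, 5):
    for n in range(1, 5):
        for i in range(3):
            for j in range(3):
                assert powers[m + n][i][i] >= powers[m][i][j] * powers[n][j][i] - 1e-12
```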

comment

• # Paper 1, Section II, E

Let $\left(X_{n}\right)_{n \geqslant 0}$ be a Markov chain.

(a) What does it mean to say that a state $i$ is positive recurrent? How is this property related to the equilibrium probability $\pi_{i}$ ? You do not need to give a full proof, but you should carefully state any theorems you use.

(b) What is a communicating class? Prove that if states $i$ and $j$ are in the same communicating class and $i$ is positive recurrent then $j$ is positive recurrent also.

A frog is in a pond with an infinite number of lily pads, numbered $1,2,3, \ldots$ She hops from pad to pad in the following manner: if she happens to be on pad $i$ at a given time, she hops to one of pads $(1,2, \ldots, i, i+1)$ with equal probability.

(c) Find the equilibrium distribution of the corresponding Markov chain.

(d) Now suppose the frog starts on pad $k$ and stops when she returns to it. Show that the expected number of times the frog hops is $e(k-1)!$ where $e=2.718 \ldots$ What is the expected number of times she will visit the lily pad $k+1$?
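A numerical check of the equilibrium distribution consistent with the answer in (d) is $\pi_{i}=e^{-1} /(i-1)!$ (so the mean return time to pad $k$ is $1 / \pi_{k}=e(k-1)!$); this is my worked answer to (c), not quoted from the exam. Invariance can be verified on a truncated state space:

```python
import math

N = 30                                    # truncation level; the tail is ~1/29!
pi = [math.exp(-1) / math.factorial(i - 1) for i in range(1, N + 1)]
assert abs(sum(pi) - 1.0) < 1e-12         # pi sums to 1 (up to truncation)

# From pad i the frog hops to each of pads 1, ..., i+1 with probability
# 1/(i+1), so (pi P)_j = sum over i >= max(j-1, 1) of pi_i / (i+1).
for j in range(1, 15):
    mass = sum(pi[i - 1] / (i + 1) for i in range(max(j - 1, 1), N + 1))
    assert abs(mass - pi[j - 1]) < 1e-12

# Expected visits to pad k+1 between returns to pad k: pi_{k+1} / pi_k = 1/k.
k = 4
assert abs(pi[k] / pi[k - 1] - 1 / k) < 1e-12
```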

comment
• # Paper 2, Section II, E

Let $\left(X_{n}\right)_{n \geqslant 0}$ be a simple, symmetric random walk on the integers $\{\ldots,-1,0,1, \ldots\}$, with $X_{0}=0$ and $\mathbb{P}\left(X_{n+1}=i \pm 1 \mid X_{n}=i\right)=1 / 2$. For each integer $a \geqslant 1$, let $T_{a}=\inf \left\{n \geqslant 0: X_{n}=a\right\}$. Show that $T_{a}$ is a stopping time.

Define a random variable $Y_{n}$ by the rule

$Y_{n}= \begin{cases}X_{n} & \text { if } n<T_{a} \\ 2 a-X_{n} & \text { if } n \geqslant T_{a}\end{cases}$

Show that $\left(Y_{n}\right)_{n \geqslant 0}$ is also a simple, symmetric random walk.

Let $M_{n}=\max _{0 \leqslant i \leqslant n} X_{i}$. Explain why $\left\{M_{n} \geqslant a\right\}=\left\{T_{a} \leqslant n\right\}$ for $a \geqslant 0$. By using the process $\left(Y_{n}\right)_{n \geqslant 0}$ constructed above, show that, for $a \geqslant 0$,

$\mathbb{P}\left(M_{n} \geqslant a, X_{n} \leqslant a-1\right)=\mathbb{P}\left(X_{n} \geqslant a+1\right),$

and thus

$\mathbb{P}\left(M_{n} \geqslant a\right)=\mathbb{P}\left(X_{n} \geqslant a\right)+\mathbb{P}\left(X_{n} \geqslant a+1\right)$

Hence compute

$\mathbb{P}\left(M_{n}=a\right)$

when $a$ and $n$ are positive integers with $n \geqslant a$. [Hint: if $n$ is even, then $X_{n}$ must be even, and if $n$ is odd, then $X_{n}$ must be odd.]
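The identity for $\mathbb{P}\left(M_{n} \geqslant a\right)$ can be verified exactly by enumerating all $2^{n}$ equally likely paths for a small $n$; a brute-force check, not the intended proof:

```python
from itertools import product

n = 10
outcomes = []                     # (M_n, X_n) for every path of n fair steps
for path in product([-1, 1], repeat=n):
    s, m = 0, 0                   # running position and running maximum (X_0 = 0)
    for step in path:
        s += step
        m = max(m, s)
    outcomes.append((m, s))

total = len(outcomes)             # 2 ** n equally likely paths

def p_max_ge(a):
    return sum(1 for m, _ in outcomes if m >= a) / total

def p_end_ge(a):
    return sum(1 for _, s in outcomes if s >= a) / total

# P(M_n >= a) = P(X_n >= a) + P(X_n >= a + 1) for all a >= 0.
for a in range(0, n + 1):
    assert abs(p_max_ge(a) - (p_end_ge(a) + p_end_ge(a + 1))) < 1e-12
```

Differencing then gives $\mathbb{P}\left(M_{n}=a\right)=$ `p_max_ge(a) - p_max_ge(a + 1)`, which by the parity remark in the hint collapses to a single binomial probability.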

comment
• # Paper 3, Section I, E

An intrepid tourist tries to ascend Springfield's famous infinite staircase on an icy day. When he takes a step with his right foot, he reaches the next stair with probability $1 / 2$, otherwise he falls down and instantly slides back to the bottom with probability $1 / 2$. Similarly, when he steps with his left foot, he reaches the next stair with probability $1 / 3$, or slides to the bottom with probability $2 / 3$. Assume that he always steps first with his right foot when he is at the bottom, and alternates feet as he ascends. Let $X_{n}$ be his position after his $n$th step, so that $X_{n}=i$ when he is on the stair $i, i=0,1,2, \ldots$, where 0 is the bottom stair.

(a) Specify the transition probabilities $p_{i j}$ for the Markov chain $\left(X_{n}\right)_{n \geqslant 0}$ for any $i, j \geqslant 0$.

(b) Find the equilibrium probabilities $\pi_{i}$, for $i \geqslant 0$. [Hint: $\pi_{0}=5 / 9$.]

(c) Argue that the chain is irreducible and aperiodic and evaluate the limit

$\lim _{n \rightarrow \infty} \mathbb{P}\left(X_{n}=i\right)$

for each $i \geqslant 0$.
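A numerical check of the hinted value (my own working, assuming the key observation for (a): from stair $i$ the next step uses the right foot when $i$ is even and the left foot when $i$ is odd):

```python
# Success probability of the step taken from stair i: right foot (prob 1/2)
# when i is even, left foot (prob 1/3) when i is odd; failure returns to 0.
def up(i):
    return 0.5 if i % 2 == 0 else 1.0 / 3.0

N = 80                                    # truncation level; pi_i decays fast
pi = [5.0 / 9.0]                          # the hinted value pi_0 = 5/9
for i in range(N):
    pi.append(pi[i] * up(i))              # balance along the only upward edge

assert abs(sum(pi) - 1.0) < 1e-12         # pi is a probability distribution
# Invariance at state 0: all failure probability flows back to the bottom.
assert abs(sum(pi[i] * (1 - up(i)) for i in range(N + 1)) - pi[0]) < 1e-12
```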

comment
• # Paper 4, Section I, E

Consider a Markov chain $\left(X_{n}\right)_{n \geqslant 0}$ with state space $\{a, b, c, d\}$ and transition probabilities given by the following table.

\begin{tabular}{c|cccc} & $a$ & $b$ & $c$ & $d$ \\ \hline $a$ & $1 / 4$ & $1 / 4$ & $1 / 2$ & 0 \\ $b$ & 0 & $1 / 4$ & 0 & $3 / 4$ \\ $c$ & $1 / 2$ & 0 & $1 / 4$ & $1 / 4$ \\ $d$ & 0 & $1 / 2$ & 0 & $1 / 2$ \end{tabular}

By drawing an appropriate diagram, determine the communicating classes of the chain, and classify them as either open or closed. Compute the following transition and hitting probabilities:

• $\mathbb{P}\left(X_{n}=b \mid X_{0}=d\right)$ for a fixed $n \geqslant 0$

• $\mathbb{P}\left(X_{n}=c\right.$ for some $\left.n \geqslant 1 \mid X_{0}=a\right)$.
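These can be checked numerically; the closed form $\mathbb{P}\left(X_{n}=b \mid X_{0}=d\right)=\frac{2}{5}\left(1-\left(-\frac{1}{4}\right)^{n}\right)$ and the value $2/3$ below are my own worked answers, not quoted from the exam.

```python
# Transition matrix in the order (a, b, c, d), copied from the table.
P = [[0.25, 0.25, 0.5, 0.0],
     [0.0, 0.25, 0.0, 0.75],
     [0.5, 0.0, 0.25, 0.25],
     [0.0, 0.5, 0.0, 0.5]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# P(X_n = b | X_0 = d): {b, d} is closed, and the two-state chain on {b, d}
# has second eigenvalue -1/4, giving (2/5) * (1 - (-1/4)^n).
Pn = [[float(i == j) for j in range(4)] for i in range(4)]   # P^0
for n in range(0, 9):
    assert abs(Pn[3][1] - 0.4 * (1 - (-0.25) ** n)) < 1e-12
    Pn = mat_mul(Pn, P)

# P(hit c eventually from a): u_a = (1/4) u_a + (1/4) u_b + 1/2 with u_b = 0,
# since the closed class {b, d} cannot reach c; hence u_a = 2/3.
u_a = (1 / 2) / (1 - 1 / 4)
assert abs(u_a - 2 / 3) < 1e-12
```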

comment

• # Paper 1, Section II, H

A gerbil is introduced into a maze at the node labelled 0 in the diagram. It roams at random through the maze until it reaches the node labelled 1. At each vertex, it chooses to move to one of the neighbouring nodes with equal probability, independently of all other choices. Find the mean number of moves required for the gerbil to reach node 1.

Suppose now that the gerbil is intelligent, in that when it reaches a node it will not immediately return to the node from which it has just come, choosing with equal probability from all other neighbouring nodes. Express the movement of the gerbil in terms of a Markov chain whose states and transition probabilities you should specify. Find the mean number of moves until the intelligent gerbil reaches node 1. Compare with your answer to the first part, and comment briefly.

comment
• # Paper 2, Section II, H

Suppose that $B$ is a non-empty subset of the state space $I$ of a Markov chain $X$ with transition matrix $P$, and let $\tau \equiv \inf \left\{n \geqslant 0: X_{n} \in B\right\}$, with the convention that $\inf \emptyset=\infty$. If $h_{i}=P\left(\tau<\infty \mid X_{0}=i\right)$