• Paper 1, Section II, H

(a) State and prove Fatou's lemma. [You may use the monotone convergence theorem without proof, provided it is clearly stated.]

(b) Show that the inequality in Fatou's lemma can be strict.

(c) Let $\left(X_{n}: n \in \mathbb{N}\right)$ and $X$ be non-negative random variables such that $X_{n} \rightarrow X$ almost surely as $n \rightarrow \infty$. Must we have $\mathbb{E} X \leqslant \sup _{n} \mathbb{E} X_{n}$ ?
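
[Remark, not part of the question: a standard example for part (b) is $f_{n}=n 1_{(0,1 / n)}$ on $([0,1], \mathcal{B}, \mathrm{Leb})$, for which $f_{n} \rightarrow 0$ pointwise while $\int f_{n} d \mu=1$ for every $n$, so that

$\int \liminf _{n} f_{n} d \mu=0<1=\liminf _{n} \int f_{n} d \mu .$]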

• Paper 2, Section II, H

Let $(E, \mathcal{E}, \mu)$ be a measure space. A function $f$ is simple if it is of the form $f=\sum_{i=1}^{N} a_{i} 1_{A_{i}}$, where $a_{i} \in \mathbb{R}, N \in \mathbb{N}$ and $A_{i} \in \mathcal{E}$.

Now let $f:(E, \mathcal{E}, \mu) \rightarrow[0, \infty]$ be a Borel-measurable map. Show that there exists a sequence $f_{n}$ of simple functions such that $f_{n}(x) \rightarrow f(x)$ for all $x \in E$ as $n \rightarrow \infty$.

Next suppose $f$ is also $\mu$-integrable. Construct a sequence $f_{n}$ of simple $\mu$-integrable functions such that $\int_{E}\left|f_{n}-f\right| d \mu \rightarrow 0$ as $n \rightarrow \infty$.

Finally, suppose $f$ is also bounded. Show that there exists a sequence $f_{n}$ of simple functions such that $f_{n} \rightarrow f$ uniformly on $E$ as $n \rightarrow \infty$.

• Paper 3, Section II, 26H

Show that random variables $X_{1}, \ldots, X_{N}$ defined on some probability space $(\Omega, \mathcal{F}, \mathbb{P})$ are independent if and only if

$\mathbb{E}\left(\prod_{n=1}^{N} f_{n}\left(X_{n}\right)\right)=\prod_{n=1}^{N} \mathbb{E}\left(f_{n}\left(X_{n}\right)\right)$

for all bounded measurable functions $f_{n}: \mathbb{R} \rightarrow \mathbb{R}, n=1, \ldots, N$.

Now let $\left(X_{n}: n \in \mathbb{N}\right)$ be an infinite sequence of independent Gaussian random variables with zero means, $\mathbb{E} X_{n}=0$, and finite variances, $\mathbb{E} X_{n}^{2}=\sigma_{n}^{2}>0$. Show that the series $\sum_{n=1}^{\infty} X_{n}$ converges in $L^{2}(\mathbb{P})$ if and only if $\sum_{n=1}^{\infty} \sigma_{n}^{2}<\infty$.

[You may use without proof that $\mathbb{E}\left[e^{i u X_{n}}\right]=e^{-u^{2} \sigma_{n}^{2} / 2}$ for $u \in \mathbb{R}$.]
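
[Remark, not part of the question: for the second part, independence and $\mathbb{E} X_{k}=0$ give, for $m>n$,

$\mathbb{E}\left(\sum_{k=n+1}^{m} X_{k}\right)^{2}=\sum_{k=n+1}^{m} \sigma_{k}^{2},$

so the partial sums are Cauchy in the complete space $L^{2}(\mathbb{P})$ exactly when $\sum_{n} \sigma_{n}^{2}<\infty$.]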

• Paper 4, Section II, 26H

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. Show that for any sequence $A_{n} \in \mathcal{F}$ satisfying $\sum_{n=1}^{\infty} \mathbb{P}\left(A_{n}\right)<\infty$ one necessarily has $\mathbb{P}\left(\limsup _{n} A_{n}\right)=0$.

Let $\left(X_{n}: n \in \mathbb{N}\right)$ and $X$ be random variables defined on $(\Omega, \mathcal{F}, \mathbb{P})$. Show that $X_{n} \rightarrow X$ almost surely as $n \rightarrow \infty$ implies that $X_{n} \rightarrow X$ in probability as $n \rightarrow \infty$.

Show that $X_{n} \rightarrow X$ in probability as $n \rightarrow \infty$ if and only if for every subsequence $X_{n(k)}$ there exists a further subsequence $X_{n(k(r))}$ such that $X_{n(k(r))} \rightarrow X$ almost surely as $r \rightarrow \infty$.


• Paper 1, Section II, 27K

(a) Let $(X, \mathcal{F}, \nu)$ be a probability space. State the definition of the space $\mathbb{L}^{2}(X, \mathcal{F}, \nu)$. Show that it is a Hilbert space.

(b) Give an example of two real random variables $Z_{1}, Z_{2}$ that are not independent and yet have the same law.

(c) Let $Z_{1}, \ldots, Z_{n}$ be $n$ random variables distributed uniformly on $[0,1]$. Let $\lambda$ be the Lebesgue measure on the interval $[0,1]$, and let $\mathcal{B}$ be the Borel $\sigma$-algebra. Consider the expression

$D(f):=\operatorname{Var}\left[\frac{1}{n}\left(f\left(Z_{1}\right)+\ldots+f\left(Z_{n}\right)\right)-\int_{[0,1]} f d \lambda\right]$

where Var denotes the variance and $f \in \mathbb{L}^{2}([0,1], \mathcal{B}, \lambda)$.

Assume that $Z_{1}, \ldots, Z_{n}$ are pairwise independent. Compute $D(f)$ in terms of the variance $\operatorname{Var}(f):=\operatorname{Var}\left(f\left(Z_{1}\right)\right)$.
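
[Remark, not part of the question: since $\int_{[0,1]} f d \lambda$ is a constant, pairwise independence makes the covariance terms vanish, so that

$D(f)=\frac{1}{n^{2}} \sum_{i=1}^{n} \operatorname{Var}\left(f\left(Z_{i}\right)\right)=\frac{\operatorname{Var}(f)}{n} .$]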

(d) Now we no longer assume that $Z_{1}, \ldots, Z_{n}$ are pairwise independent. Show that

$\sup D(f) \geqslant \frac{1}{n},$

where the supremum ranges over functions $f \in \mathbb{L}^{2}([0,1], \mathcal{B}, \lambda)$ such that $\|f\|_{2}=1$ and $\int_{[0,1]} f d \lambda=0$.

[Hint: you may wish to compute $D\left(f_{p, q}\right)$ for the family of functions $f_{p, q}=\sqrt{\frac{k}{2}}\left(1_{I_{p}}-1_{I_{q}}\right)$ where $1 \leqslant p, q \leqslant k$, $I_{j}=\left[\frac{j}{k}, \frac{j+1}{k}\right)$ and $1_{A}$ denotes the indicator function of the subset $A$.]

• Paper 2, Section II, 26K

Let $X$ be a set. Recall that a Boolean algebra $\mathcal{B}$ of subsets of $X$ is a family of subsets containing the empty set, which is stable under finite union and under taking complements. As usual, let $\sigma(\mathcal{B})$ be the $\sigma$-algebra generated by $\mathcal{B}$.

(a) State the definitions of a $\sigma$-algebra, that of a measure on a measurable space, as well as the definition of a probability measure.

(b) State Carathéodory's extension theorem.

(c) Let $(X, \mathcal{F}, \mu)$ be a probability measure space. Let $\mathcal{B} \subset \mathcal{F}$ be a Boolean algebra of subsets of $X$. Let $\mathcal{C}$ be the family of all $A \in \mathcal{F}$ with the property that for every $\epsilon>0$, there is $B \in \mathcal{B}$ such that

$\mu(A \triangle B)<\epsilon,$

where $A \triangle B$ denotes the symmetric difference of $A$ and $B$, i.e., $A \triangle B=(A \cup B) \backslash(A \cap B)$.

(i) Show that $\sigma(\mathcal{B})$ is contained in $\mathcal{C}$. Show by example that this may fail if $\mu(X)=+\infty$.

(ii) Now assume that $(X, \mathcal{F}, \mu)=\left([0,1], \mathcal{L}_{[0,1]}, m\right)$, where $\mathcal{L}_{[0,1]}$ is the $\sigma$-algebra of Lebesgue measurable subsets of $[0,1]$ and $m$ is the Lebesgue measure. Let $\mathcal{B}$ be the family of all finite unions of sub-intervals. Is it true that $\mathcal{C}$ is equal to $\mathcal{L}_{[0,1]}$ in this case? Justify your answer.

• Paper 3, Section II, 26K

Let $(X, \mathcal{A}, m, T)$ be a probability measure preserving system.

(a) State what it means for $(X, \mathcal{A}, m, T)$ to be ergodic.

(b) State Kolmogorov's 0-1 law for a sequence of independent random variables. What does it imply for the canonical model associated with an i.i.d. random process?

(c) Consider the special case when $X=[0,1], \mathcal{A}$ is the $\sigma$-algebra of Borel subsets, and $T$ is the map defined as

$T x=\left\{\begin{array}{l} 2 x, \quad \text { if } x \in\left[0, \frac{1}{2}\right] \\ 2-2 x, \quad \text { if } x \in\left[\frac{1}{2}, 1\right] \end{array}\right.$

(i) Check that the Lebesgue measure $m$ on $[0,1]$ is indeed an invariant probability measure for $T$.

(ii) Let $X_{0}:=1_{\left(0, \frac{1}{2}\right)}$ and $X_{n}:=X_{0} \circ T^{n}$ for $n \geqslant 1$. Show that $\left(X_{n}\right)_{n \geqslant 0}$ forms a sequence of i.i.d. random variables on $(X, \mathcal{A}, m)$, and that the $\sigma$-algebra $\sigma\left(X_{0}, X_{1}, \ldots\right)$ is all of $\mathcal{A}$. [Hint: check first that for any integer $n \geqslant 0, T^{-n}\left(0, \frac{1}{2}\right)$ is a disjoint union of $2^{n}$ intervals of length $1 / 2^{n+1}$.]

(iii) Is $(X, \mathcal{A}, m, T)$ ergodic? Justify your answer.

• Paper 4, Section II, K

(a) State and prove the strong law of large numbers for sequences of i.i.d. random variables with a finite moment of order 4.

(b) Let $\left(X_{k}\right)_{k \geqslant 1}$ be a sequence of independent random variables such that

$\mathbb{P}\left(X_{k}=1\right)=\mathbb{P}\left(X_{k}=-1\right)=\frac{1}{2}$

Let $\left(a_{k}\right)_{k \geqslant 1}$ be a sequence of real numbers such that

$\sum_{k \geqslant 1} a_{k}^{2}<\infty$

Set

$S_{n}:=\sum_{k=1}^{n} a_{k} X_{k}$

(i) Show that $S_{n}$ converges in $\mathbb{L}^{2}$ to a random variable $S$ as $n \rightarrow \infty$. Does it converge in $\mathbb{L}^{1}$ ? Does it converge in law?

(ii) Show that $\|S\|_{4} \leqslant 3^{1 / 4}\|S\|_{2}$.

(iii) Let $\left(Y_{k}\right)_{k \geqslant 1}$ be a sequence of i.i.d. standard Gaussian random variables, i.e. each $Y_{k}$ is distributed as $\mathcal{N}(0,1)$. Show that then $\sum_{k=1}^{n} a_{k} Y_{k}$ converges in law as $n \rightarrow \infty$ to a random variable and determine the law of the limit.
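
Remark (not part of the question): the bound in (ii) can be sanity-checked for finite sums, since both moments of $S_{n}=\sum_{k=1}^{n} a_{k} X_{k}$ are computable exactly by enumerating the $2^{n}$ sign patterns. A short Python sketch; the weights `a` below are an arbitrary illustrative choice:

```python
from itertools import product

def moments(a):
    """Exact E[S^2] and E[S^4] for S = sum_k a_k X_k, where the X_k are
    independent fair signs (+1 or -1), via all 2^n sign patterns."""
    n = len(a)
    m2 = m4 = 0.0
    for signs in product((-1, 1), repeat=n):
        s = sum(ak * e for ak, e in zip(a, signs))
        m2 += s ** 2
        m4 += s ** 4
    return m2 / 2 ** n, m4 / 2 ** n

a = [0.9, 0.5, 0.3, 0.2, 0.1]  # illustrative weights only
m2, m4 = moments(a)
assert abs(m2 - sum(x * x for x in a)) < 1e-12  # E S^2 = sum of a_k^2
assert m4 <= 3 * m2 ** 2                        # E S^4 <= 3 (E S^2)^2
```

The enumeration reflects the exact identity $\mathbb{E} S_{n}^{4}=3\left(\sum_{k} a_{k}^{2}\right)^{2}-2 \sum_{k} a_{k}^{4}$, from which the claimed bound is immediate.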


• Paper 1, Section II, K

Let $\mathbf{X}=\left(X_{1}, \ldots, X_{d}\right)$ be an $\mathbb{R}^{d}$-valued random variable. Given $u=\left(u_{1}, \ldots, u_{d}\right) \in \mathbb{R}^{d}$ we let

$\phi_{\mathbf{X}}(u)=\mathbb{E}\left(e^{i\langle u, \mathbf{X}\rangle}\right)$

be its characteristic function, where $\langle\cdot, \cdot\rangle$ is the usual inner product on $\mathbb{R}^{d}$.

(a) Suppose $\mathbf{X}$ is a Gaussian vector with mean 0 and covariance matrix $\sigma^{2} I_{d}$, where $\sigma>0$ and $I_{d}$ is the $d \times d$ identity matrix. What is the formula for the characteristic function $\phi_{\mathbf{X}}$ in the case $d=1$ ? Derive from it a formula for $\phi_{\mathbf{X}}$ in the case $d \geqslant 2$.

(b) We now no longer assume that $\mathbf{X}$ is necessarily a Gaussian vector. Instead we assume that the $X_{i}$ 's are independent random variables and that the random vector $A \mathbf{X}$ has the same law as $\mathbf{X}$ for every orthogonal matrix $A$. Furthermore we assume that $d \geqslant 2$.

(i) Show that there exists a continuous function $f:[0,+\infty) \rightarrow \mathbb{R}$ such that

$\phi_{\mathbf{X}}(u)=f\left(u_{1}^{2}+\ldots+u_{d}^{2}\right)$

[You may use the fact that for every two vectors $u, v \in \mathbb{R}^{d}$ such that $\langle u, u\rangle=\langle v, v\rangle$ there is an orthogonal matrix $A$ such that $A u=v$. ]

(ii) Show that for all $r_{1}, r_{2} \geqslant 0$

$f\left(r_{1}+r_{2}\right)=f\left(r_{1}\right) f\left(r_{2}\right) .$

(iii) Deduce that $f$ takes values in $(0,1]$, and furthermore that there exists $\alpha \geqslant 0$ such that $f(r)=e^{-r \alpha}$, for all $r \geqslant 0$.

(iv) What must be the law of $\mathbf{X}$ ?

[Standard properties of characteristic functions from the course may be used without proof if clearly stated.]

• Paper 2, Section II, K

(a) Let $\left(X_{i}, \mathcal{A}_{i}\right)$ for $i=1,2$ be two measurable spaces. Define the product $\sigma$-algebra $\mathcal{A}_{1} \otimes \mathcal{A}_{2}$ on the Cartesian product $X_{1} \times X_{2}$. Given a probability measure $\mu_{i}$ on $\left(X_{i}, \mathcal{A}_{i}\right)$ for each $i=1,2$, define the product measure $\mu_{1} \otimes \mu_{2}$. Assuming the existence of a product measure, explain why it is unique. [You may use standard results from the course if clearly stated.]

(b) Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space on which the real random variables $U$ and $V$ are defined. Explain what is meant when one says that $U$ has law $\mu$. On what measurable space is the measure $\mu$ defined? Explain what it means for $U$ and $V$ to be independent random variables.

(c) Now let $X=\left[-\frac{1}{2}, \frac{1}{2}\right]$, let $\mathcal{A}$ be its Borel $\sigma$-algebra and let $\mu$ be Lebesgue measure. Give an example of a measure $\eta$ on the product $(X \times X, \mathcal{A} \otimes \mathcal{A})$ such that $\eta(X \times A)=\mu(A)=\eta(A \times X)$ for every Borel set $A$, but such that $\eta$ is not Lebesgue measure on $X \times X$.

(d) Let $\eta$ be as in part (c) and let $I, J \subset X$ be intervals of length $x$ and $y$ respectively. Show that

$x+y-1 \leqslant \eta(I \times J) \leqslant \min \{x, y\}$
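
[Remark, not part of the question: these are Fréchet–Hoeffding-type bounds, and both follow from the marginal conditions. Monotonicity gives $\eta(I \times J) \leqslant \min \{\eta(I \times X), \eta(X \times J)\}=\min \{x, y\}$, while inclusion–exclusion and $\eta(X \times X)=1$ give

$\eta(I \times J)=\eta(I \times X)+\eta(X \times J)-\eta((I \times X) \cup(X \times J)) \geqslant x+y-1 .$]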

(e) Let $X$ be as in part (c). Fix $d \geqslant 2$ and let $\Pi_{i}$ denote the projection $\Pi_{i}\left(x_{1}, \ldots, x_{d}\right)=\left(x_{1}, \ldots, x_{i-1}, x_{i+1}, \ldots, x_{d}\right)$ from $X^{d}$ to $X^{d-1}$. Construct a probability measure $\eta$ on $X^{d}$ such that the image under each $\Pi_{i}$ coincides with the $(d-1)$-dimensional Lebesgue measure, while $\eta$ itself is not the $d$-dimensional Lebesgue measure. [Hint: Consider the following collection of $2d-1$ independent random variables: $U_{1}, \ldots, U_{d}$ uniformly distributed on $\left[0, \frac{1}{2}\right]$, and $\varepsilon_{1}, \ldots, \varepsilon_{d-1}$ such that $\mathbb{P}\left(\varepsilon_{i}=1\right)=\mathbb{P}\left(\varepsilon_{i}=-1\right)=\frac{1}{2}$ for each $i$.]

• Paper 3, Section II, K

(a) Let $X$ and $Y$ be real random variables such that $\mathbb{E}[f(X)]=\mathbb{E}[f(Y)]$ for every compactly supported continuous function $f$. Show that $X$ and $Y$ have the same law.

(b) Given a real random variable $Z$, let $\varphi_{Z}(s)=\mathbb{E}\left(e^{i s Z}\right)$ be its characteristic function. Prove the identity

$\iint g(\varepsilon s) f(x) e^{-i s x} \varphi_{Z}(s) d s d x=\int \hat{g}(t) \mathbb{E}[f(Z-\varepsilon t)] d t$

for real $\varepsilon>0$, where $f$ is continuous and compactly supported, and where $g$ is a Lebesgue integrable function such that $\hat{g}$ is also Lebesgue integrable, where

$\hat{g}(t)=\int g(x) e^{i t x} d x$

is its Fourier transform. Use the above identity to derive a formula for $\mathbb{E}[f(Z)]$ in terms of $\varphi_{Z}$, and recover the fact that $\varphi_{Z}$ determines the law of $Z$ uniquely.

(c) Let $X$ and $Y$ be bounded random variables such that $\mathbb{E}\left(X^{n}\right)=\mathbb{E}\left(Y^{n}\right)$ for every positive integer $n$. Show that $X$ and $Y$ have the same law.

(d) The Laplace transform $\psi_{Z}(s)$ of a non-negative random variable $Z$ is defined by the formula

$\psi_{Z}(s)=\mathbb{E}\left(e^{-s Z}\right)$

for $s \geqslant 0$. Let $X$ and $Y$ be (possibly unbounded) non-negative random variables such that $\psi_{X}(s)=\psi_{Y}(s)$ for all $s \geqslant 0$. Show that $X$ and $Y$ have the same law.

(e) Let

$f(x ; k)=1_{\{x>0\}} \frac{1}{k !} x^{k} e^{-x}$

where $k$ is a non-negative integer and $1_{\{x>0\}}$ is the indicator function of the interval $(0,+\infty)$.

Given non-negative integers $k_{1}, \ldots, k_{n}$, suppose that the random variables $X_{1}, \ldots, X_{n}$ are independent with $X_{i}$ having density function $f\left(\cdot ; k_{i}\right)$. Find the density of the random variable $X_{1}+\cdots+X_{n}$.
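
[Remark, not part of the question: one route is via part (d). Each $X_{i}$ has Laplace transform

$\psi_{X_{i}}(s)=\int_{0}^{\infty} e^{-s x} \frac{x^{k_{i}} e^{-x}}{k_{i} !} d x=(1+s)^{-\left(k_{i}+1\right)},$

so by independence $\psi_{X_{1}+\cdots+X_{n}}(s)=(1+s)^{-\left(n+\sum_{i} k_{i}\right)}$, the transform of the density $f\left(\cdot\, ; k_{1}+\cdots+k_{n}+n-1\right)$.]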

• Paper 4, Section II, K

(a) Let $\left(X_{n}\right)_{n \geqslant 1}$ and $X$ be real random variables with finite second moment on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Assume that $X_{n}$ converges to $X$ almost surely. Show that the following assertions are equivalent:

(i) $X_{n} \rightarrow X$ in $\mathbf{L}^{2}$ as $n \rightarrow \infty$

(ii) $\mathbb{E}\left(X_{n}^{2}\right) \rightarrow \mathbb{E}\left(X^{2}\right)$ as $n \rightarrow \infty$.

(b) Suppose now that $\Omega=(0,1), \mathcal{F}$ is the Borel $\sigma$-algebra of $(0,1)$ and $\mathbb{P}$ is Lebesgue measure. Given a Borel probability measure $\mu$ on $\mathbb{R}$ we set

$X_{\mu}(\omega)=\inf \left\{x \in \mathbb{R} \mid F_{\mu}(x) \geqslant \omega\right\}$

where $F_{\mu}(x):=\mu((-\infty, x])$ is the distribution function of $\mu$ and $\omega \in \Omega$.

(i) Show that $X_{\mu}$ is a random variable on $(\Omega, \mathcal{F}, \mathbb{P})$ with law $\mu$.

(ii) Let $\left(\mu_{n}\right)_{n \geqslant 1}$ and $\nu$ be Borel probability measures on $\mathbb{R}$ with finite second moments. Show that

$\mathbb{E}\left(\left(X_{\mu_{n}}-X_{\nu}\right)^{2}\right) \rightarrow 0 \text { as } n \rightarrow \infty$

if and only if $\mu_{n}$ converges weakly to $\nu$ and $\int x^{2} d \mu_{n}(x)$ converges to $\int x^{2} d \nu(x)$ as $n \rightarrow \infty$.

[You may use any theorem proven in lectures as long as it is clearly stated. Furthermore, you may use without proof the fact that $\mu_{n}$ converges weakly to $\nu$ as $n \rightarrow \infty$ if and only if $X_{\mu_{n}}$ converges to $X_{\nu}$ almost surely.]


• Paper 1, Section II, J

(a) Let $X$ be a real random variable with $\mathbb{E}\left(X^{2}\right)<\infty$. Show that the variance of $X$ is equal to $\inf _{a \in \mathbb{R}}\left(\mathbb{E}(X-a)^{2}\right)$.

(b) Let $f(x)$ be the indicator function of the interval $[-1,1]$ on the real line. Compute the Fourier transform of $f$.

(c) Show that

$\int_{0}^{+\infty}\left(\frac{\sin x}{x}\right)^{2} d x=\frac{\pi}{2}$
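
Remark (not part of the question): the value in (c) can be checked numerically; a short Python sketch using a midpoint rule together with the crude tail bound $\int_{N}^{\infty}(\sin x / x)^{2} d x \leqslant 1 / N$:

```python
import math

def sinc2_integral(N=500.0, h=1e-3):
    """Midpoint-rule estimate of the integral of (sin x / x)^2 over [0, N]."""
    total = 0.0
    for i in range(int(N / h)):
        x = (i + 0.5) * h
        s = math.sin(x) / x
        total += s * s
    return total * h

val = sinc2_integral()
err = abs(val - math.pi / 2)
assert err < 2e-3  # within the 1/N tail bound for N = 500
```

An exact evaluation also follows from part (b) and Plancherel: with the convention $\widehat{f}(u)=\int f(x) e^{iux} dx$ one gets $\widehat{f}(u)=2 \sin u / u$ for $f=\mathbf{1}_{[-1,1]}$, and $\int_{\mathbb{R}}(2 \sin u / u)^{2} d u=2 \pi \int_{\mathbb{R}}|f|^{2}=4 \pi$.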

(d) Let $X$ be a real random variable and $\widehat{\mu_{X}}$ be its characteristic function.

(i) Assume that $\left|\widehat{\mu_{X}}(u)\right|=1$ for some $u \in \mathbb{R}$. Show that there exists $\theta \in \mathbb{R}$ such that almost surely:

$u X \in \theta+2 \pi \mathbb{Z}$

(ii) Assume that $\left|\widehat{\mu_{X}}(u)\right|=\left|\widehat{\mu_{X}}(v)\right|=1$ for some real numbers $u$, $v$ not equal to 0 and such that $u / v$ is irrational. Prove that $X$ is almost surely constant. [Hint: You may wish to consider an independent copy of $X$.]

• Paper 2, Section II, J

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. Let $\left(X_{n}\right)_{n \geqslant 1}$ be a sequence of random variables with $\mathbb{E}\left(\left|X_{n}\right|^{2}\right) \leqslant 1$ for all $n \geqslant 1$.

(a) Suppose $Z$ is another random variable such that $\mathbb{E}\left(|Z|^{2}\right)<\infty$. Why is $Z X_{n}$ integrable for each $n$ ?

(b) Assume $\mathbb{E}\left(Z X_{n}\right) \underset{n \rightarrow \infty}{\longrightarrow} 0$ for every random variable $Z$ on $(\Omega, \mathcal{F}, \mathbb{P})$ such that $\mathbb{E}\left(|Z|^{2}\right)<\infty$. Show that there is a subsequence $Y_{k}:=X_{n_{k}}, k \geqslant 1$, such that

$\frac{1}{N} \sum_{k=1}^{N} Y_{k} \underset{N \rightarrow \infty}{\longrightarrow} 0 \text { in } \mathbb{L}^{2}$

(c) Assume that $X_{n} \rightarrow X$ in probability. Show that $X \in \mathbb{L}^{2}$. Show that $X_{n} \rightarrow X$ in $\mathbb{L}^{1}$. Must it converge also in $\mathbb{L}^{2} ?$ Justify your answer.

(d) Assume that the $\left(X_{n}\right)_{n \geqslant 1}$ are independent. Give a necessary and sufficient condition on the sequence $\left(\mathbb{E}\left(X_{n}\right)\right)_{n \geqslant 1}$ for the sequence

$Y_{N}=\frac{1}{N} \sum_{k=1}^{N} X_{k}$

to converge in $\mathbb{L}^{2}$.

• Paper 3, Section II, J

Let $m$ be the Lebesgue measure on the real line. Recall that if $E \subseteq \mathbb{R}$ is a Borel subset, then

$m(E)=\inf \left\{\sum_{n \geqslant 1}\left|I_{n}\right|, E \subseteq \bigcup_{n \geqslant 1} I_{n}\right\},$

where the infimum is taken over all covers of $E$ by countably many intervals, and $|I|$ denotes the length of an interval $I$.

(a) State the definition of a Borel subset of $\mathbb{R}$.

(b) State a definition of a Lebesgue measurable subset of $\mathbb{R}$.

(c) Explain why the following sets are Borel and compute their Lebesgue measure:

$\mathbb{Q}, \quad \mathbb{R} \backslash \mathbb{Q}, \quad \bigcap_{n \geqslant 2}\left[\frac{1}{n}, n\right] .$

(d) State the definition of a Borel measurable function $f: \mathbb{R} \rightarrow \mathbb{R}$.

(e) Let $f$ be a Borel measurable function $f: \mathbb{R} \rightarrow \mathbb{R}$. Is it true that the subset of all $x \in \mathbb{R}$ where $f$ is continuous at $x$ is a Borel subset? Justify your answer.

(f) Let $E \subseteq[0,1]$ be a Borel subset with $m(E)=1 / 2+\alpha, \alpha>0$. Show that

$E-E:=\{x-y: x, y \in E\}$

contains the interval $(-2 \alpha, 2 \alpha)$.

(g) Let $E \subseteq \mathbb{R}$ be a Borel subset such that $m(E)>0$. Show that for every $\varepsilon>0$, there exist $a<b$ in $\mathbb{R}$ such that

$m(E \cap(a, b))>(1-\varepsilon) m((a, b)) .$

Deduce that $E-E$ contains an open interval around 0.

• Paper 4, Section II, J

Let $(X, \mathcal{A})$ be a measurable space. Let $T: X \rightarrow X$ be a measurable map, and $\mu$ a probability measure on $(X, \mathcal{A})$.

(a) State the definition of the following properties of the system $(X, \mathcal{A}, \mu, T)$ :

(i) $\mu$ is T-invariant.

(ii) $T$ is ergodic with respect to $\mu$.

(b) State the pointwise ergodic theorem.

(c) Give an example of a probability measure preserving system $(X, \mathcal{A}, \mu, T)$ in which $\operatorname{Card}\left(T^{-1}\{x\}\right)>1$ for $\mu$-a.e. $x$.

(d) Assume $X$ is finite and $\mathcal{A}$ is the boolean algebra of all subsets of $X$. Suppose that $\mu$ is a $T$-invariant probability measure on $X$ such that $\mu(\{x\})>0$ for all $x \in X$. Show that $T$ is a bijection.

(e) Let $X=\mathbb{N}$, the set of positive integers, and $\mathcal{A}$ be the $\sigma$-algebra of all subsets of $X$. Suppose that $\mu$ is a $T$-invariant ergodic probability measure on $X$. Show that there is a finite subset $Y \subseteq X$ with $\mu(Y)=1$.


• Paper 1, Section II, J

(a) Give the definition of the Borel $\sigma$-algebra on $\mathbb{R}$ and a Borel function $f: E \rightarrow \mathbb{R}$ where $(E, \mathcal{E})$ is a measurable space.

(b) Suppose that $\left(f_{n}\right)$ is a sequence of Borel functions which converges pointwise to a function $f$. Prove that $f$ is a Borel function.

(c) Let $R_{n}:[0,1) \rightarrow \mathbb{R}$ be the function which gives the $n$th binary digit of a number in $[0,1)$ (where we do not allow for the possibility of an infinite sequence of 1s). Prove that $R_{n}$ is a Borel function.

(d) Let $f:[0,1)^{2} \rightarrow[0, \infty]$ be the function such that $f(x, y)$ for $(x, y) \in[0,1)^{2}$ is equal to the number of digits in the binary expansions of $x$ and $y$ which disagree. Prove that $f$ is non-negative measurable.

(e) Compute the Lebesgue measure of $f^{-1}([0, \infty))$, i.e. the set of pairs of numbers in $[0,1)$ whose binary expansions disagree in a finite number of digits.

• Paper 2, Section II, J

(a) Give the definition of the Fourier transform $\widehat{f}$ of a function $f \in L^{1}\left(\mathbb{R}^{d}\right)$.

(b) Explain what it means for Fourier inversion to hold.

(c) Prove that Fourier inversion holds for $g_{t}(x)=(2 \pi t)^{-d / 2} e^{-\|x\|^{2} /(2 t)}$. Show all of the steps in your computation. Deduce that Fourier inversion holds for Gaussian convolutions, i.e. any function of the form $f * g_{t}$ where $t>0$ and $f \in L^{1}\left(\mathbb{R}^{d}\right)$.

(d) Prove that any function $f$ for which Fourier inversion holds has a bounded, continuous version. In other words, there exists $g$ bounded and continuous such that $f(x)=g(x)$ for a.e. $x \in \mathbb{R}^{d}$.

(e) Does Fourier inversion hold for $f=\mathbf{1}_{[0,1]}$ ?

• Paper 3, Section II, J

(a) Suppose that $\mathcal{X}=\left(X_{n}\right)$ is a sequence of random variables on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Give the definition of what it means for $\mathcal{X}$ to be uniformly integrable.

(b) State and prove Hölder's inequality.

(c) Explain what it means for a family of random variables to be $L^{p}$ bounded. Prove that an $L^{p}$ bounded sequence is uniformly integrable provided $p>1$.

(d) Prove or disprove: every sequence which is $L^{1}$ bounded is uniformly integrable.

• Paper 4, Section II, J

(a) Suppose that $(E, \mathcal{E}, \mu)$ is a finite measure space and $\theta: E \rightarrow E$ is a measurable map. Prove that $\mu_{\theta}(A)=\mu\left(\theta^{-1}(A)\right)$ defines a measure on $(E, \mathcal{E})$.

(b) Suppose that $\mathcal{A}$ is a $\pi$-system which generates $\mathcal{E}$. Using Dynkin's lemma, prove that $\theta$ is measure-preserving if and only if $\mu_{\theta}(A)=\mu(A)$ for all $A \in \mathcal{A}$.

(c) State Birkhoff's ergodic theorem and the maximal ergodic lemma.

(d) Consider the case $(E, \mathcal{E}, \mu)=([0,1), \mathcal{B}([0,1)), \mu)$ where $\mu$ is Lebesgue measure on $[0,1)$. Let $\theta:[0,1) \rightarrow[0,1)$ be the following map. If $x=\sum_{n=1}^{\infty} 2^{-n} \omega_{n}$ is the binary expansion of $x$ (where we disallow infinite sequences of 1s), then $\theta(x)=\sum_{n=1}^{\infty} 2^{-n}\left(\omega_{n-1} \mathbf{1}_{n \in E}+\omega_{n+1} \mathbf{1}_{n \in O}\right)$ where $E$ and $O$ are respectively the even and odd elements of $\mathbb{N}$.

(i) Prove that $\theta$ is measure-preserving. [You may assume that $\theta$ is measurable.]

(ii) Prove or disprove: $\theta$ is ergodic.


• Paper 1, Section II, J

Throughout this question $(E, \mathcal{E}, \mu)$ is a measure space and $\left(f_{n}\right), f$ are measurable functions.

(a) Give the definitions of pointwise convergence, pointwise a.e. convergence, and convergence in measure.

(b) If $f_{n} \rightarrow f$ pointwise a.e., does $f_{n} \rightarrow f$ in measure? Give a proof or a counterexample.

(c) If $f_{n} \rightarrow f$ in measure, does $f_{n} \rightarrow f$ pointwise a.e.? Give a proof or a counterexample.

(d) Now suppose that $(E, \mathcal{E})=([0,1], \mathcal{B}([0,1]))$ and that $\mu$ is Lebesgue measure on $[0,1]$. Suppose $\left(f_{n}\right)$ is a sequence of Borel measurable functions on $[0,1]$ which converges pointwise a.e. to $f$.

(i) For each $n, k$ let $E_{n, k}=\bigcup_{m \geqslant n}\left\{x:\left|f_{m}(x)-f(x)\right|>1 / k\right\}$. Show that $\lim _{n \rightarrow \infty} \mu\left(E_{n, k}\right)=0$ for each $k \in \mathbb{N}$.

(ii) Show that for every $\epsilon>0$ there exists a set $A$ with $\mu(A)<\epsilon$ so that $f_{n} \rightarrow f$ uniformly on $[0,1] \backslash A$.

(iii) Does (ii) hold with $[0,1]$ replaced by $\mathbb{R}$ ? Give a proof or a counterexample.

• Paper 2, Section II, J

(a) State Jensen's inequality. Give the definition of $\|\cdot\|_{L^{p}}$ and the space $L^{p}$ for $1 \leqslant p<\infty$. If $\|f-g\|_{L^{p}}=0$, is it true that $f=g$? Justify your answer. State and prove Hölder's inequality using Jensen's inequality.

(b) Suppose that $(E, \mathcal{E}, \mu)$ is a finite measure space. Show that if $1 \leqslant q \leqslant p<\infty$ and $f \in L^{p}(E)$ then $f \in L^{q}(E)$. Give the definition of $\|\cdot\|_{L^{\infty}}$ and show that $\|f\|_{L^{p}} \rightarrow\|f\|_{L^{\infty}}$ as $p \rightarrow \infty$.

(c) Suppose that $1 \leqslant q<p<\infty$. Show that if $f$ belongs to both $L^{p}(\mathbb{R})$ and $L^{q}(\mathbb{R})$, then $f \in L^{r}(\mathbb{R})$ for any $r \in[q, p]$. If $f \in L^{p}(\mathbb{R})$, must we have $f \in L^{q}(\mathbb{R})$? Give a proof or a counterexample.

• Paper 3, Section II, J

(a) Define the Borel $\sigma$-algebra $\mathcal{B}$ and the Borel functions.

(b) Give an example with proof of a set in $[0,1]$ which is not Lebesgue measurable.

(c) The Cantor set $\mathcal{C}$ is given by

$\mathcal{C}=\left\{\sum_{k=1}^{\infty} \frac{a_{k}}{3^{k}}:\left(a_{k}\right) \text { is a sequence with } a_{k} \in\{0,2\} \text { for all } k\right\}$

(i) Explain why $\mathcal{C}$ is Lebesgue measurable.

(ii) Compute the Lebesgue measure of $\mathcal{C}$.

(iii) Is every subset of $\mathcal{C}$ Lebesgue measurable?

(iv) Let $f:[0,1] \rightarrow \mathcal{C}$ be the function given by

$f(x)=\sum_{k=1}^{\infty} \frac{2 a_{k}}{3^{k}} \quad \text { where } \quad a_{k}=\left\lfloor 2^{k} x\right\rfloor-2\left\lfloor 2^{k-1} x\right\rfloor$

Explain why $f$ is a Borel function.

(v) Using the previous parts, prove the existence of a Lebesgue measurable set which is not Borel.

• Paper 4, Section II, J

Give the definitions of the convolution $f * g$ and of the Fourier transform $\widehat{f}$ of $f$, and show that $\widehat{f * g}=\widehat{f} \widehat{g}$. State what it means for Fourier inversion to hold for a function $f$.

State the Plancherel identity and compute the $L^{2}$ norm of the Fourier transform of the function $f(x)=e^{-x} \mathbf{1}_{[0,1]}$.

Suppose that $\left(f_{n}\right), f$ are functions in $L^{1}$ such that $f_{n} \rightarrow f$ in $L^{1}$ as $n \rightarrow \infty$. Show that $\widehat{f}_{n} \rightarrow \widehat{f}$ uniformly.

Give the definition of weak convergence, and state and prove the Central Limit Theorem.


• Paper 1, Section II, J

(a) Define the following concepts: a $\pi$-system, a $d$-system and a $\sigma$-algebra.

(b) State the Dominated Convergence Theorem.

(c) Does the set function

$\mu(A)= \begin{cases}0 & \text { for } A \text { bounded } \\ 1 & \text { for } A \text { unbounded }\end{cases}$

furnish an example of a Borel measure?

(d) Suppose $g:[0,1] \rightarrow[0,1]$ is a measurable function. Let $f:[0,1] \rightarrow \mathbb{R}$ be continuous with $f(0) \leqslant f(1)$. Show that the limit

$\lim _{n \rightarrow \infty} \int_{0}^{1} f\left(g(x)^{n}\right) d x$

exists and lies in the interval $[f(0), f(1)]$.
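
[Remark, not part of the question: one possible approach is to note that $g(x)^{n} \rightarrow 1_{\{g=1\}}(x)$ pointwise, so by continuity of $f$ and dominated convergence (with $\lambda$ denoting Lebesgue measure)

$\int_{0}^{1} f\left(g(x)^{n}\right) d x \rightarrow f(0)\, \lambda(\{g<1\})+f(1)\, \lambda(\{g=1\}),$

a convex combination of $f(0)$ and $f(1)$.]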

• Paper 2, Section II, J

(a) Let $(E, \mathcal{E}, \mu)$ be a measure space, and let $1 \leqslant p<\infty$. What does it mean to say that $f$ belongs to $L^{p}(E, \mathcal{E}, \mu)$ ?

(b) State Hölder's inequality.

(c) Consider the measure space of the unit interval endowed with Lebesgue measure. Suppose $f \in L^{2}(0,1)$ and let $0<\alpha<1 / 2$.

(i) Show that for all $x \in \mathbb{R}$,

$\int_{0}^{1}|f(y)||x-y|^{-\alpha} d y<\infty$

(ii) For $x \in \mathbb{R}$, define

$g(x)=\int_{0}^{1} f(y)|x-y|^{-\alpha} d y$

Show that for $x \in \mathbb{R}$ fixed, the function $g$ satisfies

$|g(x+h)-g(x)| \leqslant\|f\|_{2} \cdot(I(h))^{1 / 2},$

where

$I(h)=\int_{0}^{1}\left(|x+h-y|^{-\alpha}-|x-y|^{-\alpha}\right)^{2} d y .$

(iii) Prove that $g$ is a continuous function. [Hint: You may find it helpful to split the integral defining $I(h)$ into several parts.]

• Paper 3, Section II, J

(a) Let $(E, \mathcal{E}, \mu)$ be a measure space. What does it mean to say that $T: E \rightarrow E$ is a measure-preserving transformation? What does it mean to say that a set $A \in \mathcal{E}$ is invariant under $T$ ? Show that the class of invariant sets forms a $\sigma$-algebra.

(b) Take $E$ to be $[0,1)$ with Lebesgue measure on its Borel $\sigma$-algebra. Show that the baker's map $T:[0,1) \rightarrow[0,1)$ defined by

$T(x)=2 x-\lfloor 2 x\rfloor$

is measure-preserving.

(c) Describe in detail the construction of the canonical model for sequences of independent random variables having a given distribution $m$.

Define the Bernoulli shift map and prove it is a measure-preserving ergodic transformation.

[You may use without proof other results concerning sequences of independent random variables proved in the course, provided you state these clearly.]

• Paper 4, Section II, J

(a) State Fatou's lemma.

(b) Let $X$ be a random variable on $\mathbb{R}^{d}$ and let $\left(X_{k}\right)_{k=1}^{\infty}$ be a sequence of random variables on $\mathbb{R}^{d}$. What does it mean to say that $X_{k} \rightarrow X$ weakly?

State and prove the Central Limit Theorem for i.i.d. real-valued random variables. [You may use auxiliary theorems proved in the course provided these are clearly stated.]

(c) Let $X$ be a real-valued random variable with characteristic function $\varphi$. Let $\left(h_{n}\right)_{n=1}^{\infty}$ be a sequence of real numbers with $h_{n} \neq 0$ and $h_{n} \rightarrow 0$. Prove that if we have

$\liminf _{n \rightarrow \infty} \frac{2 \varphi(0)-\varphi\left(-h_{n}\right)-\varphi\left(h_{n}\right)}{h_{n}^{2}}<\infty$

then $\mathbb{E}\left[X^{2}\right]<\infty$.


• Paper 1, Section II, 26K

What is meant by the Borel $\sigma$-algebra on the real line $\mathbb{R}$ ?

Define the Lebesgue measure of a Borel subset of $\mathbb{R}$ using the concept of outer measure.

Let $\mu$ be the Lebesgue measure on $\mathbb{R}$. Show that, for any Borel set $B$ which is contained in the interval $[0,1]$, and for any $\varepsilon>0$, there exist $n \in \mathbb{N}$ and disjoint intervals $I_{1}, \ldots, I_{n}$ contained in $[0,1]$ such that, for $A=I_{1} \cup \cdots \cup I_{n}$, we have

$\mu(A \triangle B) \leqslant \varepsilon$

where $A \triangle B$ denotes the symmetric difference $(A \backslash B) \cup(B \backslash A)$.

Show that there does not exist a Borel set $B$ contained in $[0,1]$ such that, for all intervals $I$ contained in $[0,1]$,

$\mu(B \cap I)=\mu(I) / 2$

• Paper 2, Section II, 26K

State and prove the monotone convergence theorem.

Let $\left(E_{1}, \mathcal{E}_{1}, \mu_{1}\right)$ and $\left(E_{2}, \mathcal{E}_{2}, \mu_{2}\right)$ be finite measure spaces. Define the product $\sigma$-algebra $\mathcal{E}=\mathcal{E}_{1} \otimes \mathcal{E}_{2}$ on $E_{1} \times E_{2}$.

Define the product measure $\mu=\mu_{1} \otimes \mu_{2}$ on $\mathcal{E}$, and show carefully that $\mu$ is countably additive.

[You may use without proof any standard facts concerning measurability provided these are clearly stated.]

comment
• Paper 3, Section II, K

(i) Let $(E, \mathcal{E}, \mu)$ be a measure space. What does it mean to say that a function $\theta: E \rightarrow E$ is a measure-preserving transformation?

What does it mean to say that $\theta$ is ergodic?

State Birkhoff's almost everywhere ergodic theorem.

(ii) Consider the set $E=(0,1]^{2}$ equipped with its Borel $\sigma$-algebra and Lebesgue measure. Fix an irrational number $a \in(0,1]$ and define $\theta: E \rightarrow E$ by

$\theta\left(x_{1}, x_{2}\right)=\left(x_{1}+a, x_{2}+a\right)$

where addition in each coordinate is understood to be modulo 1. Show that $\theta$ is a measure-preserving transformation. Is $\theta$ ergodic? Justify your answer.

Let $f$ be an integrable function on $E$ and let $\bar{f}$ be the invariant function associated with $f$ by Birkhoff's theorem. Write down a formula for $\bar{f}$ in terms of $f$. [You are not expected to justify this answer.]
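[Not part of the question: a numerical sketch, taking $a=\sqrt{2}-1$. The function $\cos \left(2 \pi\left(x_{1}-x_{2}\right)\right)$ is $\theta$-invariant and non-constant, so its Birkhoff averages do not move, while the averages of $\cos \left(2 \pi x_{1}\right)$ tend to the constant $0$.]

```python
import math

a = math.sqrt(2) - 1  # an irrational rotation amount in (0, 1]

def orbit_average(f, x1, x2, n_steps):
    # Birkhoff average (1/n) * sum_{k<n} f(theta^k(x1, x2)) for the diagonal
    # translation theta(x1, x2) = (x1 + a, x2 + a), coordinates taken mod 1.
    total = 0.0
    for _ in range(n_steps):
        total += f(x1, x2)
        x1 = (x1 + a) % 1.0
        x2 = (x2 + a) % 1.0
    return total / n_steps

def invariant(x1, x2):
    # cos(2*pi*(x1 - x2)) is theta-invariant and non-constant,
    # which is what rules out ergodicity of theta.
    return math.cos(2 * math.pi * (x1 - x2))

def non_invariant(x1, x2):
    return math.cos(2 * math.pi * x1)

print(orbit_average(invariant, 0.3, 0.7, 10_000))      # stays at cos(0.8*pi)
print(orbit_average(non_invariant, 0.3, 0.7, 10_000))  # tends to 0
```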

comment
• Paper 4, Section II, K

Let $\left(X_{n}: n \in \mathbb{N}\right)$ be a sequence of independent identically distributed random variables. Set $S_{n}=X_{1}+\cdots+X_{n}$.

(i) State the strong law of large numbers in terms of the random variables $X_{n}$.

(ii) Assume now that the $X_{n}$ are non-negative and that their expectation is infinite. Let $R \in(0, \infty)$. What does the strong law of large numbers say about the limiting behaviour of $S_{n}^{R} / n$, where $S_{n}^{R}=\left(X_{1} \wedge R\right)+\cdots+\left(X_{n} \wedge R\right)$ ?

Deduce that $S_{n} / n \rightarrow \infty$ almost surely.

Show that

$\sum_{n=1}^{\infty} \mathbb{P}\left(X_{n} \geqslant R n\right)=\infty$

Show that $X_{n} \geqslant R n$ infinitely often almost surely.

(iii) Now drop the assumption that the $X_{n}$ are non-negative but continue to assume that $\mathbb{E}\left(\left|X_{1}\right|\right)=\infty$. Show that, almost surely,

$\limsup _{n \rightarrow \infty}\left|S_{n}\right| / n=\infty$
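[Not part of the question: a numerical illustration of part (ii) with an assumed heavy-tailed example. Take $X$ with density $x^{-2}$ on $[1, \infty)$, so $\mathbb{E} X=\infty$; then $\mathbb{E}(X \wedge R)=1+\log R$ grows without bound in $R$, which is exactly what forces $S_{n} / n \rightarrow \infty$ once the strong law is applied to $S_{n}^{R} / n$.]

```python
import math

def truncated_mean(R, steps=200_000):
    # E[X ∧ R] for X with density x^{-2} on [1, ∞) (so E[X] = ∞):
    # on [1, R] the integrand is min(x, R)/x^2 = 1/x (midpoint rule),
    # and the exact tail is ∫_R^∞ R x^{-2} dx = 1.
    h = (R - 1) / steps
    body = sum(h / (1 + (i + 0.5) * h) for i in range(steps))
    return body + 1.0

for R in (10, 100, 1000):
    print(R, truncated_mean(R))  # grows like 1 + log(R): unbounded in R
```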

comment

• Paper 1, Section II, $26 \mathrm{K}$

State Dynkin's $\pi$-system/$d$-system lemma.

Let $\mu$ and $\nu$ be probability measures on a measurable space $(E, \mathcal{E})$. Let $\mathcal{A}$ be a $\pi$-system on $E$ generating $\mathcal{E}$. Suppose that $\mu(A)=\nu(A)$ for all $A \in \mathcal{A}$. Show that $\mu=\nu$.

What does it mean to say that a sequence of random variables is independent?

Let $\left(X_{n}: n \in \mathbb{N}\right)$ be a sequence of independent random variables, all uniformly distributed on $[0,1]$. Let $Y$ be another random variable, independent of $\left(X_{n}: n \in \mathbb{N}\right)$. Define random variables $Z_{n}$ in $[0,1]$ by $Z_{n}=\left(X_{n}+Y\right) \bmod 1$. What is the distribution of $Z_{1}$ ? Justify your answer.

Show that the sequence of random variables $\left(Z_{n}: n \in \mathbb{N}\right)$ is independent.
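[Not part of the question: a small numerical sketch of why $Z_{1}$ is uniform. For any fixed value $y$ of $Y$, the map $x \mapsto(x+y) \bmod 1$ preserves Lebesgue measure on $[0,1)$, so pushing a fine uniform grid through it fills all bins equally.]

```python
import math

def shifted_histogram(y, bins=10, grid=100_000):
    # Push a fine uniform grid on [0, 1) through x -> (x + y) mod 1 and count
    # bin occupancies: a flat histogram reflects that the shift of a uniform
    # point is again uniform, whatever the (independent) value y of Y is.
    counts = [0] * bins
    for i in range(grid):
        z = ((i + 0.5) / grid + y) % 1.0
        counts[min(int(z * bins), bins - 1)] += 1
    return counts

counts = shifted_histogram(math.pi % 1.0)  # an arbitrary fixed shift
print(counts)  # each bin holds about grid/bins = 10000 points
```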

comment
• Paper 2, Section II, $26 \mathrm{K}$

Let $\left(f_{n}: n \in \mathbb{N}\right)$ be a sequence of non-negative measurable functions defined on a measure space $(E, \mathcal{E}, \mu)$. Show that $\liminf _{n} f_{n}$ is also a non-negative measurable function.

State the Monotone Convergence Theorem.

State and prove Fatou's Lemma.

Let $\left(f_{n}: n \in \mathbb{N}\right)$ be as above. Suppose that $f_{n}(x) \rightarrow f(x)$ as $n \rightarrow \infty$ for all $x \in E$. Show that

$\mu\left(\min \left\{f_{n}, f\right\}\right) \rightarrow \mu(f) .$

Deduce that, if $f$ is integrable and $\mu\left(f_{n}\right) \rightarrow \mu(f)$, then $f_{n}$ converges to $f$ in $L^{1}$. [Still assume that $f_{n}$ and $f$ are as above.]

comment
• Paper 3, Section II, $25 \mathrm{K}$

Let $X$ be an integrable random variable with $\mathbb{E}(X)=0$. Show that the characteristic function $\phi_{X}$ is differentiable with $\phi_{X}^{\prime}(0)=0$. [You may use without proof standard convergence results for integrals provided you state them clearly.]

Let $\left(X_{n}: n \in \mathbb{N}\right)$ be a sequence of independent random variables, all having the same distribution as $X$. Set $S_{n}=X_{1}+\cdots+X_{n}$. Show that $S_{n} / n \rightarrow 0$ in distribution. Deduce that $S_{n} / n \rightarrow 0$ in probability. [You may not use the Strong Law of Large Numbers.]

comment
• Paper 4, Section II, K

State Birkhoff's almost-everywhere ergodic theorem.

Let $\left(X_{n}: n \in \mathbb{N}\right)$ be a sequence of independent random variables such that

$\mathbb{P}\left(X_{n}=0\right)=\mathbb{P}\left(X_{n}=1\right)=1 / 2$

Define for $k \in \mathbb{N}$

$Y_{k}=\sum_{n=1}^{\infty} X_{k+n-1} / 2^{n}$

What is the distribution of $Y_{k}$? Show that the random variables $Y_{1}$ and $Y_{2}$ are not independent.

Set $S_{n}=Y_{1}+\cdots+Y_{n}$. Show that $S_{n} / n$ converges as $n \rightarrow \infty$ almost surely and determine the limit. [You may use without proof any standard theorem provided you state it clearly.]
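[Not part of the question: a simulation sketch, truncating each $Y_{k}$ at 40 binary digits. The sequence $\left(Y_{k}\right)$ is stationary and ergodic even though it is not independent, and the running average settles near $1 / 2$.]

```python
import random

random.seed(0)  # fixed seed, so the run is reproducible

def y_value(bits, k, precision_bits):
    # Y_k = sum_{n>=1} X_{k+n-1} / 2^n, truncated after `precision_bits` digits.
    return sum(bits[k + n - 1] / 2 ** n for n in range(1, precision_bits + 1))

n_terms, precision_bits = 20_000, 40
bits = [random.randint(0, 1) for _ in range(n_terms + precision_bits)]
avg = sum(y_value(bits, k, precision_bits) for k in range(n_terms)) / n_terms
print(avg)  # close to 1/2, the almost-sure limit of S_n / n
```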

comment

• Paper 1, Section II, J

Carefully state and prove Jensen's inequality for a convex function $c: I \rightarrow \mathbb{R}$, where $I \subseteq \mathbb{R}$ is an interval. Assuming that $c$ is strictly convex, give necessary and sufficient conditions for the inequality to be strict.

Let $\mu$ be a Borel probability measure on $\mathbb{R}$, and suppose $\mu$ has a strictly positive probability density function $f_{0}$ with respect to Lebesgue measure. Let $\mathcal{P}$ be the family of all strictly positive probability density functions $f$ on $\mathbb{R}$ with respect to Lebesgue measure such that $\log \left(f / f_{0}\right) \in L^{1}(\mu)$. Let $X$ be a random variable with distribution $\mu$. Prove that the mapping

$f \mapsto \mathbb{E}\left[\log \frac{f}{f_{0}}(X)\right]$

has a unique maximiser over $\mathcal{P}$, attained when $f=f_{0}$ almost everywhere.
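[Not part of the question: a numerical sketch of the final claim with an assumed pair of densities. Take $f_{0}$ the $N(0,1)$ density and $f$ the $N(m, 1)$ density, so $\log \left(f / f_{0}\right) \in L^{1}(\mu)$; then $\mathbb{E}\left[\log \left(f / f_{0}\right)(X)\right]=-m^{2} / 2$, which is maximised, with value $0$, exactly at $m=0$, i.e. at $f=f_{0}$.]

```python
import math

def normal_density(x, m=0.0):
    # Density of N(m, 1); f0 below is N(0, 1) and f is N(m, 1) -- assumed
    # example densities, not ones fixed by the question.
    return math.exp(-(x - m) ** 2 / 2) / math.sqrt(2 * math.pi)

def objective(m, lo=-12.0, hi=12.0, steps=200_000):
    # E[log(f/f0)(X)] for X ~ f0, by the midpoint rule; exactly -m^2/2 here,
    # so it is maximised (with value 0) precisely at m = 0, i.e. f = f0.
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * h
        ratio = normal_density(x, m) / normal_density(x)
        total += normal_density(x) * math.log(ratio) * h
    return total

for m in (0.0, 0.5, 1.0):
    print(m, objective(m))  # 0 at m = 0 and strictly negative otherwise
```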

comment
• Paper 2, Section II, J

The Fourier transform of a Lebesgue integrable function $f \in L^{1}(\mathbb{R})$ is given by

$\hat{f}(u)=\int_{\mathbb{R}} f(x) e^{i x u} d \mu(x)$

where $\mu$ is Lebesgue measure on the real line. For $f(x)=e^{-a x^{2}}, x \in \mathbb{R}, a>0$, prove that

$\hat{f}(u)=\sqrt{\frac{\pi}{a}} e^{-\frac{u^{2}}{4 a}}$

[You may use properties of derivatives of Fourier transforms without proof provided they are clearly stated, as well as the fact that $\phi(x)=(2 \pi)^{-1 / 2} e^{-x^{2} / 2}$ is a probability density function.]

State and prove the almost everywhere Fourier inversion theorem for Lebesgue integrable functions on the real line. [You may use standard results from the course, such as the dominated convergence theorem and Fubini's theorem. You may also use that $g_{t} * f(x):=\int_{\mathbb{R}} g_{t}(x-y) f(y) d y$, where $g_{t}(z)=t^{-1} \phi(z / t)$, $t>0$, converges to $f$ in $L^{1}(\mathbb{R})$ as $t \rightarrow 0$ whenever $f \in L^{1}(\mathbb{R})$.]

The probability density function of a Gamma distribution with parameters $\lambda>0, \alpha>0$ is given by

$f_{\alpha, \lambda}(x)=\frac{1}{\Gamma(\alpha)} \lambda e^{-\lambda x}(\lambda x)^{\alpha-1} 1_{[0, \infty)}(x) .$

Let $0<\alpha<1, \lambda>0$. Is $\widehat{f_{\alpha, \lambda}}$ integrable?
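[Not part of the question: a numerical check of the Gaussian transform above. Since $e^{-a x^{2}}$ is even, the imaginary part of $\hat{f}(u)$ vanishes and the integral reduces to a real cosine integral, which matches $\sqrt{\pi / a}\, e^{-u^{2} / 4 a}$.]

```python
import math

def gaussian_transform(u, a, lo=-30.0, hi=30.0, steps=200_000):
    # f_hat(u) = ∫ e^{-a x^2} e^{i x u} dx; by symmetry the imaginary part is
    # zero, so integrate e^{-a x^2} cos(x u) with the midpoint rule.
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * h
        total += math.exp(-a * x ** 2) * math.cos(x * u) * h
    return total

a, u = 0.5, 1.5
print(gaussian_transform(u, a))                              # numerical value
print(math.sqrt(math.pi / a) * math.exp(-u ** 2 / (4 * a)))  # closed form
```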

comment
• Paper 3, Section II, J

Carefully state and prove the first and second Borel-Cantelli lemmas.

Now let $\left(A_{n}: n \in \mathbb{N}\right)$ be a sequence of events that are pairwise independent; that is, $\mathbb{P}\left(A_{n} \cap A_{m}\right)=\mathbb{P}\left(A_{n}\right) \mathbb{P}\left(A_{m}\right)$ whenever $m \neq n$. For $N \geqslant 1$, let $S_{N}=\sum_{n=1}^{N} 1_{A_{n}}$. Show that $\operatorname{Var}\left(S_{N}\right) \leqslant \mathbb{E}\left(S_{N}\right)$.

Using Chebyshev's inequality or otherwise, deduce that if $\sum_{n=1}^{\infty} \mathbb{P}\left(A_{n}\right)=\infty$, then $\lim _{N \rightarrow \infty} S_{N}=\infty$ almost surely. Conclude that $\mathbb{P}\left(A_{n}\right.$ infinitely often $)=1 .$
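[Not part of the question: a small exact check of the variance bound on an assumed example. Three pairwise independent, not mutually independent, events built from two fair coins give $\operatorname{Var}\left(S_{3}\right)=3 / 4 \leqslant 3 / 2=\mathbb{E}\left(S_{3}\right)$, the covariance terms vanishing exactly as in the proof.]

```python
from itertools import product

# A classic pairwise-independent (but not mutually independent) triple built
# from two fair coin flips (x, y): A1 = {x = 1}, A2 = {y = 1}, A3 = {x ^ y = 1}.
outcomes = list(product((0, 1), repeat=2))  # four outcomes, probability 1/4 each
events = [lambda x, y: x, lambda x, y: y, lambda x, y: x ^ y]

mean = sum(sum(e(x, y) for e in events) for (x, y) in outcomes) / 4
second = sum(sum(e(x, y) for e in events) ** 2 for (x, y) in outcomes) / 4
variance = second - mean ** 2

print(mean, variance)  # E(S_3) = 1.5 and Var(S_3) = 0.75 <= E(S_3)
```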

comment
• Paper 4, Section II, J

State and prove Fatou's lemma. [You may use the monotone convergence theorem.]

For $(E, \mathcal{E}, \mu)$ a measure space, define $L^{1}:=L^{1}(E, \mathcal{E}, \mu)$ to be the vector space of $\mu$-integrable functions on $E$, where functions equal almost everywhere are identified. Prove that $L^{1}$ is complete for the norm $\|\cdot\|_{1}$.