• # 1.I.1B

Consider the cone $K$ in $\mathbb{R}^{3}$ defined by

$x_{3}^{2}=x_{1}^{2}+x_{2}^{2}, \quad x_{3}>0 .$

Find a unit normal $\mathbf{n}=\left(n_{1}, n_{2}, n_{3}\right)$ to $K$ at the point $\mathbf{x}=\left(x_{1}, x_{2}, x_{3}\right)$ such that $n_{3} \geqslant 0$.

Show that if $\mathbf{p}=\left(p_{1}, p_{2}, p_{3}\right)$ satisfies

$p_{3}^{2} \geqslant p_{1}^{2}+p_{2}^{2}$

and $p_{3} \geqslant 0$ then

$\mathbf{p} \cdot \mathbf{n} \geqslant 0$
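A quick numeric sanity check of both claims (a sketch, not part of the question): take the normal from the gradient of $F=x_1^2+x_2^2-x_3^2$, normalised so that $n_3\geqslant 0$, and sample random $\mathbf{p}$ satisfying the stated conditions.

```python
import math, random

def unit_normal(x1, x2, x3):
    # Gradient of F = x1^2 + x2^2 - x3^2 is (2x1, 2x2, -2x3); on the cone
    # x1^2 + x2^2 = x3^2 it has length 2*sqrt(2)*x3, and negating it gives
    # the unit normal with non-negative third component.
    s = math.sqrt(2.0) * x3
    return (-x1 / s, -x2 / s, x3 / s)

random.seed(0)
for _ in range(1000):
    # random point on the cone
    phi, x3 = random.uniform(0, 2 * math.pi), random.uniform(0.1, 5.0)
    x1, x2 = x3 * math.cos(phi), x3 * math.sin(phi)
    n = unit_normal(x1, x2, x3)
    assert abs(n[0] ** 2 + n[1] ** 2 + n[2] ** 2 - 1.0) < 1e-12 and n[2] >= 0
    # random p with p3^2 >= p1^2 + p2^2 and p3 >= 0
    p3 = random.uniform(0, 5.0)
    rho, psi = random.uniform(0, p3), random.uniform(0, 2 * math.pi)
    p = (rho * math.cos(psi), rho * math.sin(psi), p3)
    dot = sum(pi * ni for pi, ni in zip(p, n))
    assert dot >= -1e-9
```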

• # 1.I.2A

Express the unit vector $\mathbf{e}_{r}$ of spherical polar coordinates in terms of the orthonormal Cartesian basis vectors $\mathbf{i}, \mathbf{j}, \mathbf{k}$.

Express the equation for the paraboloid $z=x^{2}+y^{2}$ in (i) cylindrical polar coordinates $(\rho, \phi, z)$ and (ii) spherical polar coordinates $(r, \theta, \phi)$.

In spherical polar coordinates, a surface is defined by $r^{2} \cos 2 \theta=a$, where $a$ is a real non-zero constant. Find the corresponding equation for this surface in Cartesian coordinates and sketch the surfaces in the two cases $a>0$ and $a<0$.
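Since $r^2\cos 2\theta = r^2(\cos^2\theta-\sin^2\theta) = z^2-(x^2+y^2)$ under the standard spherical conversions, the Cartesian form of the last surface is $z^2-x^2-y^2=a$. A numeric identity check (not part of the question):

```python
import math, random

def cartesian_from_spherical(r, theta, phi):
    # standard conversion with theta the polar angle from the z-axis
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

random.seed(1)
for _ in range(1000):
    r = random.uniform(0.1, 5.0)
    theta = random.uniform(0.0, math.pi)
    phi = random.uniform(0.0, 2 * math.pi)
    x, y, z = cartesian_from_spherical(r, theta, phi)
    lhs = r * r * math.cos(2 * theta)   # spherical form of the surface
    rhs = z * z - x * x - y * y         # proposed Cartesian form
    assert abs(lhs - rhs) < 1e-9
```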

• # 1.II.5C

Prove the Cauchy-Schwarz inequality,

$|\mathbf{x} \cdot \mathbf{y}| \leqslant|\mathbf{x}||\mathbf{y}|$

for two vectors $\mathbf{x}, \mathbf{y} \in \mathbb{R}^{n}$. Under what condition does equality hold?

Consider a pyramid in $\mathbb{R}^{n}$ with vertices at the origin $O$ and at $\mathbf{e}_{1}, \mathbf{e}_{2}, \ldots, \mathbf{e}_{n}$, where $\mathbf{e}_{1}=(1,0,0, \ldots), \mathbf{e}_{2}=(0,1,0, \ldots)$, and so on. The "base" of the pyramid is the $(n-1)$-dimensional object $B$ specified by $\left(\mathbf{e}_{1}+\mathbf{e}_{2}+\cdots+\mathbf{e}_{n}\right) \cdot \mathbf{x}=1$ and $\mathbf{e}_{i} \cdot \mathbf{x} \geqslant 0$ for $i=1, \ldots, n$.

Find the point $C$ in $B$ equidistant from each vertex of $B$ and find the length of $OC$. ($C$ is the centroid of $B$.)

Show, using the Cauchy-Schwarz inequality, that this is the closest point in $B$ to the origin $O$.

Calculate the angle between $O C$ and any edge of the pyramid connected to $O$. What happens to this angle and to the length of $O C$ as $n$ tends to infinity?
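Assuming the centroid works out to $C=(1/n,\ldots,1/n)$ (a hypothesis for this check, not a derivation), the lengths and angles above can be verified numerically:

```python
import math

for n in [2, 3, 10, 100, 10000]:
    C = [1.0 / n] * n                  # candidate centroid of the base B
    OC = math.sqrt(sum(c * c for c in C))
    assert abs(OC - 1.0 / math.sqrt(n)) < 1e-12
    # distance from C to each vertex e_i is the same: sqrt(1 - 1/n)
    d = math.sqrt((1.0 - 1.0 / n) ** 2 + (n - 1) * (1.0 / n) ** 2)
    assert abs(d - math.sqrt(1.0 - 1.0 / n)) < 1e-12
    # angle between OC and an edge e_i through O: cos = (C . e_i)/(|C||e_i|)
    cos_angle = (1.0 / n) / OC
    assert abs(cos_angle - 1.0 / math.sqrt(n)) < 1e-12
# as n grows, |OC| -> 0 and cos_angle -> 0, i.e. the angle tends to pi/2
```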

• # 1.II.6C

Given a vector $\mathbf{x}=\left(x_{1}, x_{2}\right) \in \mathbb{R}^{2}$, write down the vector $\mathbf{x}^{\prime}$ obtained by rotating $\mathbf{x}$ through an angle $\theta$.

Given a unit vector $\mathbf{n} \in \mathbb{R}^{3}$, any vector $\mathbf{x} \in \mathbb{R}^{3}$ may be written as $\mathbf{x}=\mathbf{x}_{\|}+\mathbf{x}_{\perp}$ where $\mathbf{x}_{\|}$ is parallel to $\mathbf{n}$ and $\mathbf{x}_{\perp}$ is perpendicular to $\mathbf{n}$. Write down explicit formulae for $\mathbf{x}_{\|}$ and $\mathbf{x}_{\perp}$, in terms of $\mathbf{n}$ and $\mathbf{x}$. Hence, or otherwise, show that the linear map

$\mathbf{x} \mapsto \mathbf{x}^{\prime}=(\mathbf{x} \cdot \mathbf{n}) \mathbf{n}+\cos \theta(\mathbf{x}-(\mathbf{x} \cdot \mathbf{n}) \mathbf{n})+\sin \theta(\mathbf{n} \times \mathbf{x}) \tag{*}$

describes a rotation about $\mathbf{n}$ through an angle $\theta$, in the positive sense defined by the right hand rule.

Write equation $(*)$ in matrix form, $x_{i}^{\prime}=R_{i j} x_{j}$. Show that the trace $R_{i i}=1+2 \cos \theta$.

Given the rotation matrix

$R=\frac{1}{2}\left(\begin{array}{ccc} 1+r & 1-r & 1 \\ 1-r & 1+r & -1 \\ -1 & 1 & 2 r \end{array}\right)$

where $r=1 / \sqrt{2}$, find the two pairs $(\theta, \mathbf{n})$, with $-\pi \leqslant \theta<\pi$, giving rise to $R$. Explain why both represent the same rotation.
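A numeric check (a sketch, not a derivation): the candidate pairs one reads off from the trace and the antisymmetric part of $R$ are $(\theta,\mathbf{n})=(\pi/4,(1,1,0)/\sqrt{2})$ and $(-\pi/4,-(1,1,0)/\sqrt{2})$, and both reproduce $R$ when substituted into the rotation formula above.

```python
import math

def rodrigues(theta, n):
    # matrix form of x' = (x.n)n + cos(theta)(x - (x.n)n) + sin(theta)(n x x)
    c, s = math.cos(theta), math.sin(theta)
    n1, n2, n3 = n
    K = [[0.0, -n3, n2], [n3, 0.0, -n1], [-n2, n1, 0.0]]   # cross-product matrix
    I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    return [[c * I[i][j] + (1 - c) * n[i] * n[j] + s * K[i][j]
             for j in range(3)] for i in range(3)]

r = 1 / math.sqrt(2)
R = [[(1 + r) / 2, (1 - r) / 2, 1 / 2],
     [(1 - r) / 2, (1 + r) / 2, -1 / 2],
     [-1 / 2, 1 / 2, r]]

# trace check: R_ii = 1 + 2 cos(theta) with theta = pi/4
assert abs(sum(R[i][i] for i in range(3)) - (1 + 2 * math.cos(math.pi / 4))) < 1e-12

n = (r, r, 0.0)   # candidate axis (1,1,0)/sqrt(2)
for theta, axis in [(math.pi / 4, n), (-math.pi / 4, tuple(-a for a in n))]:
    Rt = rodrigues(theta, axis)
    assert all(abs(Rt[i][j] - R[i][j]) < 1e-12 for i in range(3) for j in range(3))
```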

• # 1.II.7B

(i) Let $\mathbf{u}, \mathbf{v}$ be unit vectors in $\mathbb{R}^{3}$. Write the transformation on vectors $\mathbf{x} \in \mathbb{R}^{3}$

$\mathbf{x} \mapsto(\mathbf{u} \cdot \mathbf{x}) \mathbf{u}+\mathbf{v} \times \mathbf{x}$

in matrix form as $\mathbf{x} \mapsto A \mathbf{x}$ for a matrix $A$. Find the eigenvalues in the two cases (a) when $\mathbf{u} \cdot \mathbf{v}=0$, and (b) when $\mathbf{u}, \mathbf{v}$ are parallel.

(ii) Let $\mathcal{M}$ be the set of $2 \times 2$ complex hermitian matrices with trace zero. Show that if $A \in \mathcal{M}$ there is a unique vector $\mathbf{x} \in \mathbb{R}^{3}$ such that

$A=\mathcal{R}(\mathbf{x})=\left(\begin{array}{cc} x_{3} & x_{1}-i x_{2} \\ x_{1}+i x_{2} & -x_{3} \end{array}\right)$

Show that if $U$ is a $2 \times 2$ unitary matrix, the transformation

$A \mapsto U^{-1} A U$

maps $\mathcal{M}$ to $\mathcal{M}$, and that if $U^{-1} \mathcal{R}(\mathbf{x}) U=\mathcal{R}(\mathbf{y})$, then $\|\mathbf{x}\|=\|\mathbf{y}\|$ where $\|\cdot\|$ means ordinary Euclidean length. [Hint: Consider determinants.]
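Following the hint, $\|\mathbf{x}\|^{2}=-\det\mathcal{R}(\mathbf{x})$, and conjugation by a unitary preserves both trace and determinant. A numeric sketch (the particular unitary $U$ below is an arbitrary sample choice with $\det U=1$):

```python
import cmath, math

def R_of_x(x):
    # the hermitian traceless matrix associated with x in R^3
    x1, x2, x3 = x
    return [[x3 + 0j, x1 - 1j * x2], [x1 + 1j * x2, -x3 + 0j]]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

x = (1.0, 2.0, 3.0)
A = R_of_x(x)
assert abs(det2(A) + (1 + 4 + 9)) < 1e-12          # det R(x) = -|x|^2

# a sample unitary with det U = 1
a = 0.7
U = [[cmath.exp(0.3j) * math.cos(a), math.sin(a) + 0j],
     [-math.sin(a) + 0j, cmath.exp(-0.3j) * math.cos(a)]]
Uinv = [[U[1][1], -U[0][1]], [-U[1][0], U[0][0]]]  # adjugate; inverse since det U = 1

B = matmul(matmul(Uinv, A), U)
assert abs(B[0][0] + B[1][1]) < 1e-9               # still traceless
assert abs(B[0][1] - B[1][0].conjugate()) < 1e-9   # still hermitian
assert abs(det2(B) - det2(A)) < 1e-9               # determinant preserved, so |y| = |x|
```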

• # 1.II.8A

(i) State de Moivre's theorem. Use it to express $\cos 5 \theta$ as a polynomial in $\cos \theta$.

(ii) Find the two fixed points of the Möbius transformation

$z \longmapsto \omega=\frac{3 z+1}{z+3}$

that is, find the two values of $z$ for which $\omega=z$.

Given that $c \neq 0$ and $(a-d)^{2}+4 b c \neq 0$, show that a general Möbius transformation

$z \longmapsto \omega=\frac{a z+b}{c z+d}, \quad a d-b c \neq 0,$

has two fixed points $\alpha, \beta$ given by

$\alpha=\frac{a-d+m}{2 c}, \quad \beta=\frac{a-d-m}{2 c}$

where $\pm m$ are the square roots of $(a-d)^{2}+4 b c$.

Show that such a transformation can be expressed in the form

$\frac{\omega-\alpha}{\omega-\beta}=k \frac{z-\alpha}{z-\beta},$

where $k$ is a constant that you should determine.
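Both parts can be spot-checked numerically. The multiplier used below, $k=(a+d-m)/(a+d+m)$, is a conjectured value for this check (the question leaves $k$ to be determined); the de Moivre expansion $\cos 5\theta = 16\cos^{5}\theta-20\cos^{3}\theta+5\cos\theta$ is also verified at a sample angle.

```python
import cmath, math, random

# de Moivre check at a sample angle
t = 0.73
c5 = 16 * math.cos(t) ** 5 - 20 * math.cos(t) ** 3 + 5 * math.cos(t)
assert abs(math.cos(5 * t) - c5) < 1e-12

def mobius(a, b, c, d, z):
    return (a * z + b) / (c * z + d)

a, b, c, d = 3.0, 1.0, 1.0, 3.0             # the specific map (3z+1)/(z+3)
m = cmath.sqrt((a - d) ** 2 + 4 * b * c)    # here m = 2
alpha = (a - d + m) / (2 * c)               # fixed point +1
beta = (a - d - m) / (2 * c)                # fixed point -1
assert abs(mobius(a, b, c, d, alpha) - alpha) < 1e-12
assert abs(mobius(a, b, c, d, beta) - beta) < 1e-12

# conjectured multiplier k = (a + d - m)/(a + d + m); here k = 1/2
k = (a + d - m) / (a + d + m)
random.seed(2)
for _ in range(200):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    if abs(z - beta) < 0.1:
        continue                            # avoid blow-up near the pole of the cross-ratio
    w = mobius(a, b, c, d, z)
    assert abs((w - alpha) / (w - beta) - k * (z - alpha) / (z - beta)) < 1e-9
```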


• # 1.I.3F

Let $a_{n} \in \mathbb{R}$ for $n \geqslant 1$. What does it mean to say that the infinite series $\sum_{n} a_{n}$ converges to some value $A$? Let $s_{n}=a_{1}+\cdots+a_{n}$ for all $n \geqslant 1$. Show that if $\sum_{n} a_{n}$ converges to some value $A$, then the sequence whose $n$-th term is

$\left(s_{1}+\cdots+s_{n}\right) / n$

converges to some value $\tilde{A}$ as $n \rightarrow \infty$. Is it always true that $A=\tilde{A}$? Give an example where $\left(s_{1}+\cdots+s_{n}\right) / n$ converges but $\sum_{n} a_{n}$ does not.
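A numerical illustration of both behaviours (a sketch, not a proof), using Grandi's series $a_n=(-1)^{n+1}$, whose partial sums alternate $1,0,1,0,\ldots$ with Cesàro mean $1/2$, and the convergent case $a_n=2^{-n}$:

```python
# Grandi-type example: the series diverges, but (s_1 + ... + s_n)/n -> 1/2
N = 100000
s, running = 0, 0
for n in range(1, N + 1):
    s += (-1) ** (n + 1)     # s_n
    running += s             # s_1 + ... + s_n
cesaro = running / N
assert abs(cesaro - 0.5) < 1e-12

# when the series converges (a_n = 2^-n, A = 1), the averages agree with A
s2, running2 = 0.0, 0.0
for n in range(1, N + 1):
    s2 += 0.5 ** n
    running2 += s2
assert abs(running2 / N - 1.0) < 1e-3
```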

• # 1.I.4D

Let $\sum_{n=0}^{\infty} a_{n} z^{n}$ and $\sum_{n=0}^{\infty} b_{n} z^{n}$ be power series in the complex plane with radii of convergence $R$ and $S$ respectively. Show that if $R \neq S$ then $\sum_{n=0}^{\infty}\left(a_{n}+b_{n}\right) z^{n}$ has radius of convergence $\min (R, S)$. [Any results on absolute convergence that you use should be clearly stated.]
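A small root-test illustration (not a proof), with the sample coefficients $a_n\equiv 1$ (so $R=1$) and $b_n=2^{n}$ (so $S=1/2$): $|a_n+b_n|^{1/n}$ approaches $1/\min(R,S)=2$, consistent with radius of convergence $\min(R,S)$.

```python
# |a_n + b_n|^(1/n) for a_n = 1, b_n = 2^n should tend to 2
vals = [(1 + 2 ** n) ** (1.0 / n) for n in range(1, 60)]
assert abs(vals[-1] - 2.0) < 0.05
assert all(vals[i] >= vals[i + 1] for i in range(len(vals) - 1))  # decreasing toward 2
```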

• # 1.II.10E

Prove that if the function $f$ is infinitely differentiable on an interval $(r, s)$ containing $a$, then for any $x \in(r, s)$ and any positive integer $n$ we may expand $f(x)$ in the form

$f(a)+(x-a) f^{\prime}(a)+\frac{(x-a)^{2}}{2 !} f^{\prime \prime}(a)+\cdots+\frac{(x-a)^{n}}{n !} f^{(n)}(a)+R_{n}(f, a, x),$

where the remainder term $R_{n}(f, a, x)$ should be specified explicitly in terms of $f^{(n+1)}$.

Let $p(t)$ be a nonzero polynomial in $t$, and let $f$ be the real function defined by

$f(x)=p\left(\frac{1}{x}\right) \exp \left(-\frac{1}{x^{2}}\right) \quad(x \neq 0), \quad f(0)=0 .$

Show that $f$ is differentiable everywhere and that

$f^{\prime}(x)=q\left(\frac{1}{x}\right) \exp \left(-\frac{1}{x^{2}}\right) \quad(x \neq 0), \quad f^{\prime}(0)=0,$

where $q(t)=2 t^{3} p(t)-t^{2} p^{\prime}(t)$. Deduce that $f$ is infinitely differentiable, but that there exist arbitrarily small values of $x$ for which the remainder term $R_{n}(f, 0, x)$ in the Taylor expansion of $f$ about 0 does not tend to 0 as $n \rightarrow \infty$.
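The claimed formula for $f^{\prime}$ can be spot-checked numerically with a sample choice $p(t)=t^{2}+1$ (any nonzero polynomial would do), comparing a central finite difference of $f$ against $q(1/x)\exp(-1/x^{2})$:

```python
import math

def p(t):
    # sample polynomial p(t) = t^2 + 1; an arbitrary choice for the check
    return t * t + 1.0

def dp(t):
    return 2.0 * t

def f(x):
    return p(1.0 / x) * math.exp(-1.0 / x ** 2) if x != 0.0 else 0.0

def q(t):
    # claimed derivative polynomial q(t) = 2 t^3 p(t) - t^2 p'(t)
    return 2.0 * t ** 3 * p(t) - t * t * dp(t)

for x in [0.4, 0.7, 1.3, -0.9]:
    h = 1e-6
    numeric = (f(x + h) - f(x - h)) / (2.0 * h)      # central difference
    claimed = q(1.0 / x) * math.exp(-1.0 / x ** 2)
    assert abs(numeric - claimed) < 1e-6

# f'(0) = 0: the difference quotient f(h)/h is already negligible for small h
assert abs(f(1e-3) / 1e-3) < 1e-12
```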

• # 1.II.11F

Consider a sequence $\left(a_{n}\right)_{n \geqslant 1}$ of real numbers. What does it mean to say that $a_{n} \rightarrow a \in \mathbb{R}$ as $n \rightarrow \infty$? What does it mean to say that $a_{n} \rightarrow \infty$ as $n \rightarrow \infty$? What does it mean to say that $a_{n} \rightarrow-\infty$ as $n \rightarrow \infty$? Show that for every sequence of real numbers there exists a subsequence which converges to a value in $\mathbb{R} \cup\{\infty,-\infty\}$. [You may use the Bolzano-Weierstrass theorem provided it is clearly stated.]

Give an example of a bounded sequence $\left(a_{n}\right)_{n \geqslant 1}$ which is not convergent, but for which

$a_{n+1}-a_{n} \rightarrow 0 \quad \text { as } \quad n \rightarrow \infty$
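One standard example (a sample choice, not dictated by the question) is $a_n=\sin\sqrt{n}$: it is bounded, consecutive differences shrink like $1/(2\sqrt{n})$, yet the sequence keeps returning near both $+1$ and $-1$. A numeric check:

```python
import math

# candidate example: a_n = sin(sqrt(n))
a = [math.sin(math.sqrt(n)) for n in range(1, 200001)]

# consecutive differences tend to 0 (bounded by 1/(2 sqrt(n)))
assert abs(a[-1] - a[-2]) < 1e-2

# no convergence: late terms still come close to both +1 and -1
tail = a[100000:]
assert max(tail) > 0.99 and min(tail) < -0.99
```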

• # 1.II.12D

Let $f_{1}$ and $f_{2}$ be Riemann integrable functions on $[a, b]$. Show that $f_{1}+f_{2}$ is Riemann integrable.

Let $f$ be a Riemann integrable function on $[a, b]$ and set $f^{+}(x)=\max (f(x), 0)$. Show that $f^{+}$and $|f|$ are Riemann integrable.

Let $f$ be a function on $[a, b]$ such that $|f|$ is Riemann integrable. Is it true that $f$ is Riemann integrable? Justify your answer.

Show that if $f_{1}$ and $f_{2}$ are Riemann integrable on $[a, b]$, then so is $\max \left(f_{1}, f_{2}\right)$. Suppose now $f_{1}, f_{2}, \ldots$ is a sequence of Riemann integrable functions on $[a, b]$ and $f(x)=\sup _{n} f_{n}(x)$; is it true that $f$ is Riemann integrable? Justify your answer.
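One standard route to the positive statements uses the pointwise identities $f^{+}=(f+|f|)/2$ and $\max(f_{1},f_{2})=(f_{1}+f_{2}+|f_{1}-f_{2}|)/2$, which reduce these parts to sums, scalar multiples and absolute values of integrable functions. A quick numeric check of the identities themselves:

```python
import random

random.seed(3)
for _ in range(1000):
    u, v = random.uniform(-10, 10), random.uniform(-10, 10)
    # max(u, v) = (u + v + |u - v|) / 2
    assert abs(max(u, v) - (u + v + abs(u - v)) / 2) < 1e-12
    # u^+ = max(u, 0) = (u + |u|) / 2
    assert abs(max(u, 0.0) - (u + abs(u)) / 2) < 1e-12
```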

• # 1.II.9E

State and prove the Intermediate Value Theorem.

Suppose that the function $f$ is differentiable everywhere in some open interval containing $[a, b]$, and that $f^{\prime}(a)<k<f^{\prime}(b)$. By considering the functions $g$ and $h$ defined by

$g(x)=\frac{f(x)-f(a)}{x-a} \quad(a<x \leqslant b)$

and

$h(x)=\frac{f(b)-f(x)}{b-x} \quad(a \leqslant x<b)$

or otherwise, show that there is a subinterval $\left[a^{\prime}, b^{\prime}\right] \subseteq[a, b]$ such that

$\frac{f\left(b^{\prime}\right)-f\left(a^{\prime}\right)}{b^{\prime}-a^{\prime}}=k$

Deduce that there exists $c \in(a, b)$ with $f^{\prime}(c)=k$. [You may assume the Mean Value Theorem.]
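A numeric illustration of the argument with the sample $f(x)=x^{3}-x$ on $[0,1]$ and $k=-1/2$, so that $f^{\prime}(0)=-1<k<f^{\prime}(1)=2$ (for this $f$ the derivative happens to be continuous, which the general statement does not assume): bisect on the chord slope $g$ to find a subinterval with slope exactly $k$, then locate $c$.

```python
def f(x):
    return x ** 3 - x              # sample function; f'(x) = 3x^2 - 1

def fprime(x):
    return 3 * x ** 2 - 1

a, b, k = 0.0, 1.0, -0.5           # f'(a) = -1 < k < f'(b) = 2

def g(x):                          # chord slope from a; g(x) -> f'(a) as x -> a
    return (f(x) - f(a)) / (x - a)

# g near a is below k while g(b) = 0 > k, so bisect for g(b') = k
lo, hi = 1e-9, b
for _ in range(80):
    mid = (lo + hi) / 2
    if g(mid) < k:
        lo = mid
    else:
        hi = mid
b_prime = (lo + hi) / 2
assert abs(g(b_prime) - k) < 1e-9  # chord slope over [a, b'] equals k

# the Mean Value Theorem on [a, b'] then yields c with f'(c) = k;
# here f' is continuous, so we can locate c directly by another bisection
lo, hi = a, b_prime                # f'(a) < k and f'(b') > k in this example
for _ in range(80):
    mid = (lo + hi) / 2
    if fprime(mid) < k:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2
assert abs(fprime(c) - k) < 1e-9 and a < c < b
```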
