• # Paper 1, Section I, $3 K$

Let $C$ be an $[n, m, d]$ code. Define the parameters $n, m$ and $d$. In each of the following cases define the new code and give its parameters.

(i) $C^{+}$ is the parity extension of $C$.

(ii) $C^{-}$ is the punctured code (assume $n \geqslant 2$).

(iii) $\bar{C}$ is the shortened code (assume $n \geqslant 2$).

Let $C=\{000,100,010,001,110,101,011,111\}$. Suppose the parity extension of $C$ is transmitted through a binary symmetric channel where $p$ is the probability of a single-bit error in the channel. Calculate the probability that an error in the transmission of a single codeword is not noticed.
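For the last part, $C^{+}$ consists of the eight even-weight words of length 4, and an error goes unnoticed exactly when the error pattern is non-zero but of even weight. A short sketch (assuming independent bit errors; the value of $p$ is an arbitrary example) confirms the closed form $6 p^{2}(1-p)^{2}+p^{4}$:

```python
from itertools import product
from fractions import Fraction

p = Fraction(1, 10)  # arbitrary example value of the bit-error probability

# An error is unnoticed exactly when the error pattern on the 4 transmitted
# bits is non-zero but has even weight (the parity check still passes).
unnoticed = Fraction(0)
for e in product([0, 1], repeat=4):
    w = sum(e)
    if w > 0 and w % 2 == 0:
        unnoticed += p**w * (1 - p)**(4 - w)

assert unnoticed == 6 * p**2 * (1 - p)**2 + p**4
```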

• # Paper 1, Section II, $11 K$

Let $\Sigma_{1}=\left\{\mu_{1}, \ldots, \mu_{N}\right\}$ be a finite alphabet and $X$ a random variable that takes each value $\mu_{i}$ with probability $p_{i}$. Define the entropy $H(X)$ of $X$.

Suppose $\Sigma_{2}=\{0,1\}$ and $c: \Sigma_{1} \rightarrow \Sigma_{2}^{*}$ is a decipherable code. Write down an expression for the expected word length $E(S)$ of $c$.

Prove that the minimum expected word length $S^{*}$ of a decipherable code $c: \Sigma_{1} \rightarrow \Sigma_{2}^{*}$ satisfies

$H(X) \leqslant S^{*} .$

[You can use Kraft's and Gibbs' inequalities as long as they are clearly stated.]

Suppose a decipherable binary code has word lengths $s_{1}, \ldots, s_{N}$. Show that

$N \log N \leqslant s_{1}+\cdots+s_{N} .$

Suppose $X$ is a source that emits $N$ sourcewords $a_{1}, \ldots, a_{N}$ and $p_{i}$ is the probability that $a_{i}$ is emitted, where $p_{1} \geqslant p_{2} \geqslant \cdots \geqslant p_{N}$. Let $b_{1}=0$ and $b_{i}=\sum_{j=1}^{i-1} p_{j}$ for $2 \leqslant i \leqslant N$. Let $s_{i}=\left\lceil-\log p_{i}\right\rceil$ for $1 \leqslant i \leqslant N$. Now define a code $c$ by $c\left(a_{i}\right)=b_{i}^{*}$ where $b_{i}^{*}$ is the (fractional part of the) binary expansion of $b_{i}$ to $s_{i}$ binary places. Prove that this defines a decipherable code.

What does it mean for a code to be optimal? Is the code $c$ defined in the previous paragraph in terms of the $b_{i}^{*}$ necessarily optimal? Justify your answer.

• # Paper 2, Section I, K

State Shannon's noisy coding theorem for a binary symmetric channel, defining the terms involved.

Suppose a channel matrix, with output alphabet of size $n$, is such that the entries in each row are the elements of the set $\left\{p_{1}, \ldots, p_{n}\right\}$ in some order. Further suppose that all columns are permutations of one another. Show that the channel's information capacity $C$ is given by

$C=\log n+\sum_{i=1}^{n} p_{i} \log p_{i}$

Show that the information capacity of the channel matrix

$\left(\begin{array}{cccc} \frac{1}{3} & \frac{1}{3} & \frac{1}{6} & \frac{1}{6} \\ \frac{1}{6} & \frac{1}{6} & \frac{1}{3} & \frac{1}{3} \end{array}\right)$

is given by $C=\frac{5}{3}-\log 3$.
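A numerical check of this value (a sketch; logs are base 2, using the formula just derived and, as a cross-check, the mutual information at the uniform input distribution):

```python
import math

rows = [[1/3, 1/3, 1/6, 1/6],
        [1/6, 1/6, 1/3, 1/3]]
n = len(rows[0])

# formula: C = log n + sum_i p_i log p_i, using either row (they are
# permutations of one another)
C = math.log2(n) + sum(p * math.log2(p) for p in rows[0])

# cross-check: mutual information at the uniform input distribution
py = [(rows[0][j] + rows[1][j]) / 2 for j in range(n)]   # output distribution
HY = -sum(p * math.log2(p) for p in py)                  # = 2 here
HYgX = -sum(p * math.log2(p) for p in rows[0])           # same for each row
assert abs(C - (HY - HYgX)) < 1e-12
assert abs(C - (5/3 - math.log2(3))) < 1e-12
```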

• # Paper 2, Section II, $12 K$

(a) Define what it means to say that $C$ is a binary cyclic code. Explain the bijection between the set of binary cyclic codes of length $n$ and the factors of $X^{n}-1$ in $\mathbb{F}_{2}[X]$.

(b) What is a linear feedback shift register?

Suppose that $M: \mathbb{F}_{2}^{d} \rightarrow \mathbb{F}_{2}^{d}$ is a linear feedback shift register. Further suppose $\mathbf{0} \neq \mathbf{x} \in \mathbb{F}_{2}^{d}$ and $k$ is a positive integer such that $M^{k} \mathbf{x}=\mathbf{x}$. Let $H$ be the $d \times k$ matrix $\left(\mathbf{x}, M \mathbf{x}, \ldots, M^{k-1} \mathbf{x}\right)$. Considering $H$ as a parity check matrix of a code $C$, show that $C$ is a binary cyclic code.

(c) Suppose that $C$ is a binary cyclic code. Prove that, if $C$ does not contain the codeword $11 \ldots 1$, then all codewords in $C$ have even weight.

• # Paper 3, Section I, K

Let $d \geqslant 2$. Define the Hamming code $C$ of length $2^{d}-1$. Explain what it means to be a perfect code and show that $C$ is a perfect code.

Suppose you are using the Hamming code of length $2^{d}-1$ and you receive the message $111 \ldots 10$ of length $2^{d}-1$. How would you decode this message using minimum distance decoding? Explain why this leads to correct decoding if at most one channel error has occurred.
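A syndrome-decoding sketch (assuming the common convention that column $i$ of the parity check matrix is the binary representation of $i$) shows that $111 \ldots 10$ decodes to the all-ones codeword, i.e. the flipped bit is taken to be the last one:

```python
from functools import reduce

def hamming_decode(word):
    """Flip the bit indicated by the syndrome, assuming column i of the
    parity check matrix is the binary representation of i (1-indexed)."""
    syndrome = reduce(lambda s, i: s ^ i,
                      (i for i, bit in enumerate(word, start=1) if bit), 0)
    decoded = list(word)
    if syndrome:                      # non-zero syndrome: flip that position
        decoded[syndrome - 1] ^= 1
    return decoded

for d in (2, 3, 4):
    n = 2**d - 1
    received = [1] * (n - 1) + [0]    # the word 111...10
    assert hamming_decode(received) == [1] * n
```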

• # Paper 4, Section I, $3 K$

Describe the Rabin scheme for coding a message $x$ as $x^{2}$ modulo a certain integer $N$.

Describe the RSA encryption scheme with public key $(N, e)$ and private key $d$.

[In both cases you should explain how you encrypt and decrypt.]

Give an advantage and a disadvantage that the Rabin scheme has over the RSA scheme.


• # Paper 1, Section I, I

(a) Briefly describe the methods of Shannon-Fano and of Huffman for the construction of prefix-free binary codes.

(b) In this part you are given that $-\log _{2}(1 / 10) \approx 3.32,-\log _{2}(2 / 10) \approx 2.32$, $-\log _{2}(3 / 10) \approx 1.74$ and $-\log _{2}(4 / 10) \approx 1.32$.

Let $\mathcal{A}=\{1,2,3,4\}$. For $k \in \mathcal{A}$, suppose that the probability of choosing $k$ is $k / 10$.

(i) Find a Shannon-Fano code for this system and the expected word length.

(ii) Find a Huffman code for this system and the expected word length.

(iii) Verify that Shannon's noiseless coding theorem is satisfied in each case.
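Parts (i) and (ii) can be checked with a short script (a sketch; Shannon-Fano lengths are taken as $\left\lceil-\log _{2} p\right\rceil$). The expected word lengths come out as $2.4$ and $1.9$, and both sit inside the noiseless coding theorem bounds, as (iii) requires:

```python
import heapq, math

probs = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}

# Shannon-Fano lengths: ceil(-log2 p)
sf_len = {k: math.ceil(-math.log2(p)) for k, p in probs.items()}
sf_expected = sum(probs[k] * sf_len[k] for k in probs)

# Huffman: repeatedly merge the two least probable nodes
heap = [(p, [k]) for k, p in probs.items()]
heapq.heapify(heap)
length = {k: 0 for k in probs}
while len(heap) > 1:
    p1, s1 = heapq.heappop(heap)
    p2, s2 = heapq.heappop(heap)
    for k in s1 + s2:            # every symbol under the merge gains one bit
        length[k] += 1
    heapq.heappush(heap, (p1 + p2, s1 + s2))
huff_expected = sum(probs[k] * length[k] for k in probs)

H = -sum(p * math.log2(p) for p in probs.values())
assert abs(sf_expected - 2.4) < 1e-9
assert abs(huff_expected - 1.9) < 1e-9
assert H <= huff_expected <= sf_expected < H + 1   # Shannon's bounds
```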

• # Paper 1, Section II, I

(a) What does it mean to say that a binary code has length $n$, size $M$ and minimum distance $d$?

Let $A(n, d)$ be the largest value of $M$ for which there exists a binary $[n, M, d]$-code.

(i) Show that $A(n, 1)=2^{n}$.

(ii) Suppose that $n, d>1$. Show that if a binary $[n, M, d]$-code exists, then a binary $[n-1, M, d-1]$-code exists. Deduce that $A(n, d) \leqslant A(n-1, d-1)$.

(iii) Suppose that $n, d \geqslant 1$. Show that $A(n, d) \leqslant 2^{n-d+1}$.

(b) (i) For integers $M$ and $N$ with $0 \leqslant N \leqslant M$, show that

$N(M-N) \leqslant\left\{\begin{array}{cl} M^{2} / 4, & \text { if } M \text { is even }, \\ \left(M^{2}-1\right) / 4, & \text { if } M \text { is odd } \end{array}\right.$

For the remainder of this question, suppose that $C$ is a binary $[n, M, d]$-code. For codewords $x=\left(x_{1} \ldots x_{n}\right), y=\left(y_{1} \ldots y_{n}\right) \in C$ of length $n$, we define $x+y$ to be the word $\left(\left(x_{1}+y_{1}\right) \ldots\left(x_{n}+y_{n}\right)\right)$ with addition modulo $2 .$

(ii) Explain why the Hamming distance $d(x, y)$ is the number of 1s in $x+y$.

(iii) Now we construct an $\left(\begin{array}{c}M \\ 2\end{array}\right) \times n$ array $A$ whose rows are all the words $x+y$ for pairs of distinct codewords $x, y$. Show that the number of 1s in $A$ is at most

$\begin{cases}n M^{2} / 4, & \text { if } M \text { is even } \\ n\left(M^{2}-1\right) / 4, & \text { if } M \text { is odd }\end{cases}$

Show also that the number of 1s in $A$ is at least $d\left(\begin{array}{c}M \\ 2\end{array}\right)$.

(iv) Using the inequalities derived in part (b)(iii), deduce that if $d$ is even and $n<2 d$ then

$A(n, d) \leqslant 2\left\lfloor\frac{d}{2 d-n}\right\rfloor$

• # Paper 2, Section I, I

(a) Define the information capacity of a discrete memoryless channel (DMC).

(b) Consider a DMC where there are two input symbols, $A$ and $B$, and three output symbols, $A, B$ and $\star$. Suppose each input symbol is left intact with probability $1 / 2$, and transformed into a $\star$ with probability $1 / 2$.

(i) Write down the channel matrix, and calculate the information capacity.

(ii) Now suppose the output is further processed by someone who cannot distinguish between $A$ and $\star$, so that the channel matrix becomes

$\left(\begin{array}{cc} 1 & 0 \\ 1 / 2 & 1 / 2 \end{array}\right)$

Calculate the new information capacity.

• # Paper 2, Section II, I

Let $C$ be the Hamming $(n, n-d)$ code of weight 3, where $n=2^{d}-1, d>1$. Let $H$ be the parity-check matrix of $C$. Let $\nu(j)$ be the number of codewords of weight $j$ in $C$.

(i) Show that for any two columns $h_{1}$ and $h_{2}$ of $H$ there exists a unique third column $h_{3}$ such that $h_{3}=h_{2}+h_{1}$. Deduce that $\nu(3)=n(n-1) / 6$.

(ii) Show that $C$ contains a codeword of weight $n$.

(iii) Find formulae for $\nu(n-1), \nu(n-2)$ and $\nu(n-3)$. Justify your answer in each case.

• # Paper 3, Section I, I

Let $N$ and $p$ be very large positive integers with $p$ a prime and $p>N$. The Chair of the Committee is able to inscribe pairs of very large integers on discs. The Chair wishes to inscribe a collection of discs in such a way that any Committee member who acquires $r$ of the discs and knows the prime $p$ can deduce the integer $N$, but owning $r-1$ discs will give no information whatsoever. What strategy should the Chair follow?

[You may use without proof standard properties of the determinant of the $r \times r$ Vandermonde matrix.]

• # Paper 4, Section I, I

(a) What does it mean to say that a cipher has perfect secrecy? Show that if a cipher has perfect secrecy then there must be at least as many possible keys as there are possible plaintext messages. What is a one-time pad? Show that a one-time pad has perfect secrecy.

(b) I encrypt a binary sequence $a_{1}, a_{2}, \ldots, a_{N}$ using a one-time pad with key sequence $k_{1}, k_{2}, k_{3}, \ldots .$ I transmit $a_{1}+k_{1}, a_{2}+k_{2}, \ldots, a_{N}+k_{N}$ to you. Then, by mistake, I also transmit $a_{1}+k_{2}, a_{2}+k_{3}, \ldots, a_{N}+k_{N+1}$ to you. Assuming that you know I have made this error, and that my message makes sense, how would you go about finding my message? Can you now decipher other messages sent using the same part of the key sequence? Briefly justify your answer.


• # Paper 1, Section I, G

Let $X$ and $Y$ be discrete random variables taking finitely many values. Define the conditional entropy $H(X \mid Y)$. Suppose $Z$ is another discrete random variable taking values in a finite alphabet, and prove that

$H(X \mid Y) \leqslant H(X \mid Y, Z)+H(Z)$

[You may use the equality $H(X, Y)=H(X \mid Y)+H(Y)$ and the inequality $H(X \mid Y) \leqslant$ $H(X) .]$

State and prove Fano's inequality.

• # Paper 1, Section II, G

What does it mean to say that $C$ is a binary linear code of length $n$, rank $k$ and minimum distance $d$ ? Let $C$ be such a code.

(a) Prove that $n \geqslant d+k-1$.

Let $x=\left(x_{1}, \ldots, x_{n}\right) \in C$ be a codeword with exactly $d$ non-zero digits.

(b) Prove that puncturing $C$ on the non-zero digits of $x$ produces a code $C^{\prime}$ of length $n-d$, rank $k-1$ and minimum distance $d^{\prime}$ for some $d^{\prime} \geqslant\left\lceil\frac{d}{2}\right\rceil$.

(c) Deduce that $n \geqslant d+\sum_{1 \leqslant \ell \leqslant k-1}\left\lceil\frac{d}{2^{\ell}}\right\rceil$.

• # Paper 2, Section I, G

Define the binary Hamming code of length $n=2^{l}-1$ for $l \geqslant 3$. Define a perfect code. Show that a binary Hamming code is perfect.

What is the weight of the dual code of a binary Hamming code when $l=3$?

• # Paper 2, Section II, G

Describe the Huffman coding scheme and prove that Huffman codes are optimal.

Are the following statements true or false? Justify your answers.

(i) Given $m$ messages with probabilities $p_{1} \geqslant p_{2} \geqslant \cdots \geqslant p_{m}$ a Huffman coding will assign a unique set of word lengths.

(ii) An optimal code must be Huffman.

(iii) Suppose the $m$ words of a Huffman code have word lengths $s_{1}, s_{2}, \ldots, s_{m}$. Then

$\sum_{i=1}^{m} 2^{-s_{i}}=1$

[Throughout this question you may assume that a decipherable code with prescribed word lengths exists if and only if there is a prefix-free code with the same word lengths.]

• # Paper 3, Section I, G

What does it mean to transmit reliably at rate $R$ through a binary symmetric channel (BSC) with error probability $p$?

Assuming Shannon's second coding theorem (also known as Shannon's noisy coding theorem), compute the supremum of all possible reliable transmission rates of a BSC. Describe qualitatively the behaviour of the capacity as $p$ varies. Your answer should address the following cases:

(i) $p$ is small,

(ii) $p=1 / 2$,

(iii) $p>1 / 2$.

• # Paper 4, Section I, $3 G$

(a) Describe Diffie-Hellman key exchange. Why is it believed to be a secure system?

(b) Consider the following authentication procedure. Alice chooses public key $N$ for the Rabin-Williams cryptosystem. To be sure we are in communication with Alice we send her a 'random item' $r \equiv m^{2} \bmod N$. On receiving $r$, Alice proceeds to decode using her knowledge of the factorisation of $N$ and finds a square root $m_{1}$ of $r$. She returns $m_{1}$ to us and we check $r \equiv m_{1}^{2} \bmod N$. Is this authentication procedure secure? Justify your answer.


• # Paper 1, Section I, H

State and prove Shannon's noiseless coding theorem. [You may use Gibbs' and Kraft's inequalities as long as they are clearly stated.]

• # Paper 1, Section II, H

Define the bar product $C_{1} \mid C_{2}$ of binary linear codes $C_{1}$ and $C_{2}$, where $C_{2}$ is a subcode of $C_{1}$. Relate the rank and minimum distance of $C_{1} \mid C_{2}$ to those of $C_{1}$ and $C_{2}$ and justify your answer.

What is a parity check matrix for a linear code? If $C_{1}$ has parity check matrix $P_{1}$ and $C_{2}$ has parity check matrix $P_{2}$, find a parity check matrix for $C_{1} \mid C_{2}$.

Using the bar product construction, or otherwise, define the Reed-Muller code $R M(d, r)$ for $0 \leqslant r \leqslant d$. Compute the rank of $R M(d, r)$. Show that all but two codewords in $R M(d, 1)$ have the same weight. Given $d$, for which $r$ is it true that all elements of $R M(d, r)$ have even weight? Justify your answer.

• # Paper 2, Section I, H

What is the channel matrix of a binary symmetric channel with error probability $p$ ?

State the maximum likelihood decoding rule and the minimum distance decoding rule. Prove that if $p<1 / 2$, then they agree.

Let $C$ be the repetition code $\{000,111\}$. Suppose a codeword from $C$ is sent through a binary symmetric channel with error probability $p$. Show that, if the minimum distance decoding rule is used, then the probability of error is $3 p^{2}-2 p^{3}$.
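The claimed error probability can be verified by enumerating all eight error patterns with exact arithmetic (a sketch; minimum distance decoding of $\{000,111\}$ is majority vote, so decoding fails exactly when at least two bits flip):

```python
from fractions import Fraction
from itertools import product

def p_error(p):
    """Probability that majority-vote decoding of {000, 111} fails,
    i.e. that 2 or 3 of the 3 transmitted bits are flipped."""
    total = Fraction(0)
    for flips in product([0, 1], repeat=3):
        w = sum(flips)
        if w >= 2:
            total += p**w * (1 - p)**(3 - w)
    return total

for p in (Fraction(1, 4), Fraction(1, 10), Fraction(2, 5)):
    assert p_error(p) == 3 * p**2 - 2 * p**3
```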

• # Paper 2, Section II, H

Describe the RSA encryption scheme with public key $(N, e)$ and private key $d$.

Suppose $N=p q$ with $p$ and $q$ distinct odd primes and $1 \leqslant x \leqslant N$ with $x$ and $N$ coprime. Denote the order of $x$ in $\mathbb{F}_{p}^{*}$ by $\mathrm{O}_{p}(x)$. Further suppose $\Phi(N)$ divides $2^{a} b$ where $b$ is odd. If $\mathrm{O}_{p}\left(x^{b}\right) \neq \mathrm{O}_{q}\left(x^{b}\right)$, prove that there exists $0 \leqslant t<a$ such that the greatest common divisor of $x^{2^{t} b}-1$ and $N$ is a nontrivial factor of $N$. Further, prove that the number of $x$ satisfying $\mathrm{O}_{p}\left(x^{b}\right) \neq \mathrm{O}_{q}\left(x^{b}\right)$ is $\geqslant \Phi(N) / 2$.

Hence, or otherwise, prove that finding the private key $d$ from the public key $(N, e)$ is essentially as difficult as factoring $N$.

Suppose a message $m$ is sent using the RSA scheme with $e=43$ and $N=77$, and $c=5$ is the received text. What is $m$?
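The numerical part can be reproduced in a few lines (a sketch, assuming the factorisation $77=7 \times 11$ and the usual decryption exponent $d \equiv e^{-1} \bmod \Phi(N)$; `pow(e, -1, phi)` needs Python 3.8+):

```python
N, e, c = 77, 43, 5
p, q = 7, 11                       # factorisation of N
phi = (p - 1) * (q - 1)            # 60
d = pow(e, -1, phi)                # private exponent: 43 * 7 = 301 = 1 mod 60
m = pow(c, d, N)                   # decrypt the received text
assert pow(m, e, N) == c           # sanity check: m encrypts back to c
print(m)                           # 47
```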

An integer $m$ satisfying $1 \leqslant m \leqslant N-1$ is called a fixed point if it is encrypted to itself. Prove that if $m$ is a fixed point then so is $N-m$.

• # Paper 3, Section I, H

Compute the rank and minimum distance of the cyclic code with generator polynomial $g(X)=X^{3}+X^{2}+1$ and parity check polynomial $h(X)=X^{4}+X^{3}+X^{2}+1$. Now let $\alpha$ be a root of $g(X)$ in the field with 8 elements. We receive the word $r(X)=X^{2}+X+1 \pmod{X^{7}-1}$. Verify that $r(\alpha)=\alpha^{4}$, and hence decode $r(X)$ using minimum-distance decoding.
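The computation $r(\alpha)=\alpha^{4}$ can be replayed in $\mathbb{F}_{8}=\mathbb{F}_{2}[X] /\left(X^{3}+X^{2}+1\right)$ (a sketch; elements are 3-bit integers, bit $i$ holding the coefficient of $\alpha^{i}$):

```python
MOD = 0b1101                       # X^3 + X^2 + 1

def mul(a, b):
    """Multiplication in F_2[X] / (X^3 + X^2 + 1)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:             # reduce as soon as degree 3 is reached
            a ^= MOD
    return r

alpha = 0b010
power = [0b001]                    # alpha^0, alpha^1, ...
for _ in range(6):
    power.append(mul(power[-1], alpha))

r_alpha = power[2] ^ power[1] ^ power[0]   # r(alpha) = alpha^2 + alpha + 1
assert r_alpha == power[4]                 # so the single error is at X^4
```

Consistently with this, $r(X)+X^{4}=X^{4}+X^{2}+X+1=(X+1) g(X)$ is a codeword, so flipping the coefficient of $X^{4}$ is the minimum-distance correction.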

• # Paper 4, Section I, H

What is a linear feedback shift register? Explain the Berlekamp-Massey method for recovering a feedback polynomial of a linear feedback shift register from its output. Illustrate the method in the case when we observe output

$010111100010 \ldots$
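A sketch implementation of Berlekamp-Massey over $\mathbb{F}_{2}$ (Massey's formulation; an illustration, not necessarily the course's exact presentation) applied to this stream returns register length $L=4$ with connection polynomial $1+X^{3}+X^{4}$, i.e. the recurrence $s_{n}=s_{n-3}+s_{n-4}$:

```python
def berlekamp_massey(s):
    """Shortest LFSR over GF(2) generating the bit sequence s.

    Returns (L, C): C = [c_0, ..., c_L] with c_0 = 1 encodes the
    connection polynomial, and s[n] = sum_{i=1}^{L} c_i * s[n-i] (mod 2).
    """
    size = len(s) + 1
    C = [0] * size; B = [0] * size
    C[0] = B[0] = 1
    L, m = 0, 1
    for n in range(len(s)):
        d = s[n]                           # discrepancy
        for i in range(1, L + 1):
            d ^= C[i] & s[n - i]
        if d == 0:
            m += 1
        elif 2 * L <= n:
            T = C[:]
            for i in range(size - m):      # C(X) += X^m * B(X)
                C[i + m] ^= B[i]
            L, B, m = n + 1 - L, T, 1
        else:
            for i in range(size - m):
                C[i + m] ^= B[i]
            m += 1
    return L, C[:L + 1]

s = [0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0]
L, C = berlekamp_massey(s)
assert (L, C) == (4, [1, 0, 0, 1, 1])      # 1 + X^3 + X^4
# the recurrence regenerates the whole observed output
for n in range(L, len(s)):
    assert s[n] == sum(C[i] & s[n - i] for i in range(1, L + 1)) % 2
```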


• # Paper 1, Section I, G

Let $C$ be a binary code of length $n$. Define the following decoding rules: (i) ideal observer, (ii) maximum likelihood, (iii) minimum distance.

Let $p$ denote the probability that a digit is mistransmitted and suppose $p<1 / 2$. Prove that maximum likelihood and minimum distance decoding agree.

Suppose codewords 000 and 111 are sent with probabilities $4 / 5$ and $1 / 5$ respectively with error probability $p=1 / 4$. If we receive 110 , how should it be decoded according to the three decoding rules above?

• # Paper 1, Section II, G

Let $C$ be a binary linear code. Explain what it means for $C$ to have length $n$ and $\operatorname{rank} k$. Explain what it means for a codeword of $C$ to have weight $j$.

Suppose $C$ has length $n$, rank $k$, and $A_{j}$ codewords of weight $j$. The weight enumerator polynomial of $C$ is given by

$W_{C}(s, t)=\sum_{j=0}^{n} A_{j} s^{j} t^{n-j}$

What is $W_{C}(1,1)$? Prove that $W_{C}(s, t)=W_{C}(t, s)$ if and only if $W_{C}(1,0)=1$.

Define the dual code $C^{\perp}$ of $C$.

(i) Let $\mathbf{y} \in \mathbb{F}_{2}^{n}$. Show that

$\sum_{\mathbf{x} \in C}(-1)^{\mathbf{x} \cdot \mathbf{y}}= \begin{cases}2^{k}, & \text { if } \mathbf{y} \in C^{\perp} \\ 0, & \text { otherwise }\end{cases}$

(ii) Extend the definition of weight to give a weight $w(\mathbf{y})$ for $\mathbf{y} \in \mathbb{F}_{2}^{n}$. Suppose that for $t$ real and all $\mathbf{x} \in C$

$\sum_{\mathbf{y} \in \mathbb{F}_{2}^{n}} t^{w(\mathbf{y})}(-1)^{\mathbf{x} \cdot \mathbf{y}}=(1-t)^{w(\mathbf{x})}(1+t)^{n-w(\mathbf{x})}$

For $s$ real, by evaluating

$\sum_{\mathbf{x} \in C}\left(\sum_{\mathbf{y} \in \mathbb{F}_{2}^{n}}(-1)^{\mathbf{x} \cdot \mathbf{y}}\left(\frac{s}{t}\right)^{w(\mathbf{y})}\right)$

in two different ways, show that

$W_{C^{\perp}}(s, t)=2^{-k} W_{C}(t-s, t+s) .$

• # Paper 2, Section I, G

Prove that a decipherable code with prescribed word lengths exists if and only if there is a prefix-free code with the same word lengths.

• # Paper 2, Section II, G

Define the entropy, $H(X)$, of a random variable $X$. State and prove Gibbs' inequality.

Hence, or otherwise, show that $H\left(p_{1}, p_{2}, p_{3}\right) \leqslant H\left(p_{1}, 1-p_{1}\right)+\left(1-p_{1}\right)$ and determine when equality occurs.

Show that the Discrete Memoryless Channel with channel matrix

$\left(\begin{array}{ccc} 1-\alpha-\beta & \alpha & \beta \\ \alpha & 1-\alpha-\beta & \beta \end{array}\right)$

has capacity $C=(1-\beta)(1-\log (1-\beta))+(1-\alpha-\beta) \log (1-\alpha-\beta)+\alpha \log \alpha$.
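The stated capacity can be checked numerically for a sample $(\alpha, \beta)$ (a sketch; the channel is symmetric under swapping the two inputs together with the first two outputs, so the uniform input distribution attains capacity and $C=H(Y)-H(Y \mid X)$ there):

```python
import math

def h(*ps):
    """Entropy (base-2 logs) of a distribution, ignoring zero entries."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

alpha, beta = 0.2, 0.3
row = (1 - alpha - beta, alpha, beta)      # each row, up to permutation

# uniform input: outputs 1 and 2 each have probability (1 - beta)/2
HY = h((1 - beta) / 2, (1 - beta) / 2, beta)
C = HY - h(*row)

claimed = ((1 - beta) * (1 - math.log2(1 - beta))
           + (1 - alpha - beta) * math.log2(1 - alpha - beta)
           + alpha * math.log2(alpha))
assert abs(C - claimed) < 1e-9
```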

• # Paper 3, Section I, G

Find and describe all binary cyclic codes of length 7. Pair each code with its dual code. Justify your answer.

• # Paper 4, Section I, G

Describe the RSA system with public key $(N, e)$ and private key $d$.

Give a simple example of how the system is vulnerable to a homomorphism attack.

Describe the El-Gamal signature scheme and explain how this can defeat a homomorphism attack.


• # Paper 1, Section I, G

Find the average length of an optimum decipherable binary code for a source that emits five words with probabilities

$0.25,0.15,0.15,0.2,0.25$

Show that, if a source emits $N$ words (with $N \geqslant 2$ ), and if $l_{1}, \ldots, l_{N}$ are the lengths of the codewords in an optimum encoding over the binary alphabet, then

$l_{1}+\cdots+l_{N} \leqslant \frac{1}{2}\left(N^{2}+N-2\right) .$

[You may assume that an optimum encoding can be given by a Huffman encoding.]

• # Paper 1, Section II, G

What does it mean to say a binary code $C$ has length $n$, size $m$ and minimum distance $d$?

Let $A(n, d)$ be the largest value of $m$ for which there exists an $[n, m, d]$-code. Prove that

$\frac{2^{n}}{V(n, d-1)} \leqslant A(n, d) \leqslant \frac{2^{n}}{V(n,\lfloor(d-1) / 2\rfloor)}$

where

$V(n, r)=\sum_{j=0}^{r}\left(\begin{array}{l} n \\ j \end{array}\right)$

Another bound for $A(n, d)$ is the Singleton bound given by

$A(n, d) \leqslant 2^{n-d+1}$

Prove the Singleton bound and give an example of a linear code of length $n>1$ that satisfies $A(n, d)=2^{n-d+1}$.

• # Paper 2, Section I, G

Show that the binary channel with channel matrix

$\left(\begin{array}{ll} 1 & 0 \\ \frac{1}{2} & \frac{1}{2} \end{array}\right)$

has capacity $\log 5-2$.
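This claim can be checked by a crude grid search over input distributions (a sketch; logs base 2, rows of the matrix indexed by inputs, so the first input always arrives intact and the second is received correctly only half the time):

```python
import math

def h2(p):
    """Binary entropy with base-2 logs."""
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

def mutual_info(q):
    """I(X;Y) when q = P(second input is sent)."""
    return h2(q / 2) - q          # H(Y) - H(Y|X): only input 2 is noisy

C = max(mutual_info(k / 100000) for k in range(100001))
assert abs(C - (math.log2(5) - 2)) < 1e-6
```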

• # Paper 2, Section II, G

Define a BCH code of length $n$, where $n$ is odd, over the field of 2 elements with design distance $\delta$. Show that the minimum weight of such a code is at least $\delta$. [Results about the Vandermonde determinant may be quoted without proof, provided they are stated clearly.]

Let $\omega \in \mathbb{F}_{16}$ be a root of $X^{4}+X+1$. Let $C$ be the $\mathrm{BCH}$ code of length 15 with defining set $\left\{\omega, \omega^{2}, \omega^{3}, \omega^{4}\right\}$. Find the generator polynomial of $C$ and the rank of $C$. Determine the error positions of the following received words:

(i) $r(X)=1+X^{6}+X^{7}+X^{8}$,

(ii) $r(X)=1+X+X^{4}+X^{5}+X^{6}+X^{9}$.

• # Paper 3, Section I, $3 G$

Describe in words the unicity distance of a cryptosystem.

Denote the cryptosystem by $\langle M, K, C\rangle$, in the usual way, and let $m \in M$ and $k \in K$ be random variables and $c=e(m, k)$. The unicity distance $U$ is formally defined to be the least $n>0$ such that $H\left(k \mid c^{(n)}\right)=0$. Derive the formula

$U=\frac{\log |K|}{\log |\Sigma|-H}$

where $H=H(m)$, and $\Sigma$ is the alphabet of the ciphertext. Make clear any assumptions you make.

The redundancy of a language is given by $R=1-\frac{H}{\log |\Sigma|}$. If a language has zero redundancy, what is the unicity distance of any cryptosystem?

• # Paper 4, Section I, G

Describe the Rabin-Williams scheme for coding a message $x$ as $x^{2}$ modulo a certain $N$. Show that, if $N$ is chosen appropriately, breaking this code is equivalent to factorising the product of two primes.


• # Paper 1, Section I, $3 G$

Let $\mathcal{A}$ be a finite alphabet. Explain what is meant by saying that a binary code $c: \mathcal{A} \rightarrow\{0,1\}^{*}$ has minimum distance $\delta$. If $c$ is such a binary code with minimum distance $\delta$, show that $c$ is $\delta-1$ error-detecting and $\left\lfloor\frac{1}{2}(\delta-1)\right\rfloor$ error-correcting.

Show that it is possible to construct a code that has minimum distance $\delta$ for any integer $\delta>0$.

• # Paper 1, Section II, G

Define the Hamming code. Show that it is a perfect, linear, 1-error correcting code.

I wish to send a message through a noisy channel to a friend. The message consists of a large number $N=1,000$ of letters from a 16-letter alphabet $\mathcal{A}$. When my friend has decoded the message, she can tell whether there have been any errors. If there have, she asks me to send the message again and this is repeated until she has received the message without error. For each individual binary digit that is transmitted, there is independently a small probability $p=0.001$ of an error.

(a) Suppose that I encode my message by writing each letter as a 4-bit binary string. The whole message is then $4 N$ bits long. What is the probability $P$ that the entire message is transmitted without error? How many times should I expect to transmit the message until my friend receives it without error?

(b) As an alternative, I use the Hamming code to encode each letter of $\mathcal{A}$ as a 7-bit binary string. What is the probability that my friend can decode a single 7-bit string correctly? Deduce that the probability $Q$ that the entire message is correctly decoded is given approximately by

$Q \simeq\left(1-21 p^{2}\right)^{N} \simeq \exp \left(-21 N p^{2}\right)$

Which coding method is better?
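The comparison can be made concrete in a few lines (a sketch following the approximations in the question; a 7-bit Hamming block decodes correctly exactly when at most one of its bits flips):

```python
import math

p, N = 0.001, 1000

# (a) plain 4-bit letters: all 4N bits must arrive intact
P = (1 - p) ** (4 * N)

# (b) Hamming(7,4): a block decodes correctly iff at most one bit flips
block_ok = (1 - p)**7 + 7 * p * (1 - p)**6
Q = block_ok ** N

# 1 - block_ok is approximately 21 p^2, hence Q is approximately exp(-21 N p^2)
assert abs((1 - block_ok) - 21 * p**2) < 0.01 * 21 * p**2
assert abs(Q - math.exp(-21 * N * p**2)) < 0.01
assert Q > P        # the Hamming scheme needs far fewer retransmissions
```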

• # Paper 2, Section I, G

A random variable $A$ takes values in the alphabet $\mathcal{A}=\{a, b, c, d, e\}$ with probabilities $0.4,0.2,0.2,0.1$ and $0.1$. Calculate the entropy of $A$.

Define what it means for a code for a general finite alphabet to be optimal. Find such a code for the distribution above and show that there are optimal codes for this distribution with differing lengths of codeword.

[You may use any results from the course without proof. Note that $\log _{2} 5 \simeq 2.32$.]

• # Paper 2, Section II, G

Briefly describe the RSA public key cipher.

Just before it went into liquidation, the Internet Bank decided that it wanted to communicate with each of its customers using an RSA cipher. So, it chose a large modulus $N$, which is the product of two large prime numbers, and chose encrypting exponents $e_{j}$ and decrypting exponents $d_{j}$ for each customer $j$. The bank published $N$ and $e_{j}$ and sent the decrypting exponent $d_{j}$ secretly to customer $j$. Show explicitly that the cipher can be broken by each customer.

The bank sent out the same message to each customer. I am not a customer of the bank but have two friends who are and I notice that their published encrypting exponents are coprime. Explain how I can find the original message. Can I break the cipher?

• # Paper 3, Section I, G

Let $A$ be a random variable that takes each value $a$ in the finite alphabet $\mathcal{A}$ with probability $p(a)$. Show that, if each $l(a)$ is an integer greater than 0 and $\sum 2^{-l(a)} \leqslant 1$, then there is a decodable binary code $c: \mathcal{A} \rightarrow\{0,1\}^{*}$ with each codeword $c(a)$ having length $l(a)$.

Prove that, for any decodable code $c: \mathcal{A} \rightarrow\{0,1\}^{*}$, we have

$H(A) \leqslant \mathbb{E} l(A)$

where $H(A)$ is the entropy of the random variable $A$. When is there equality in this inequality?

• # Paper 4, Section I, G

Explain how to construct binary Reed-Muller codes. State and prove a result giving the minimum distance for each such Reed-Muller code.


• # Paper 1, Section I, $4 \mathrm{I}$

State and prove Gibbs' inequality.

Show that, for a pair of discrete random variables $X$ and $Y$, each taking finitely many values, the joint entropy $H(X, Y)$ satisfies

$H(X, Y) \leqslant H(X)+H(Y)$

with equality precisely when $X$ and $Y$ are independent.

• # Paper 1, Section II, I

Describe, briefly, either the RSA or the Elgamal public key cipher. You should explain, without proof, why it is believed to be difficult to break the cipher you describe.

How can such a cipher be used to sign messages? You should explain how the intended recipient of the message can (a) know from whom it came; (b) know that the message has not been changed; and (c) demonstrate that the sender must have signed it.

Let $I_{0}, I_{1}, \ldots, I_{N}$ be friendly individuals each of whom has a public key cipher. $I_{0}$ wishes to send a message to $I_{N}$ by passing it first to $I_{1}$, then $I_{1}$ passes it to $I_{2}, I_{2}$ to $I_{3}$, until finally it is received by $I_{N}$. At each stage the message can be modified to show from whom it was received and to whom it is sent. Devise a way in which these modifications can be made so that $I_{N}$ can be confident both of the content of the original message and that the message has been passed through the intermediaries $I_{1}, I_{2}, \ldots, I_{N-1}$ in that order and has not been modified by an enemy agent. Assume that it takes a negligible time to transmit a message from $I_{k}$ to $I_{k+1}$ for each $k$, but the time needed to modify a message is not negligible.

• # Paper 2, Section I, I

Let $c: \mathcal{A} \rightarrow\{0,1\}^{*}$ be a decodable binary code defined on a finite alphabet $\mathcal{A}$. Let $l(a)$ be the length of the code word $c(a)$. Prove that

$\sum_{a \in \mathcal{A}} 2^{-l(a)} \leqslant 1$

Show that, for the decodable code $c: \mathcal{A} \rightarrow\{0,1\}^{*}$ described above, there is a prefix-free code $p: \mathcal{A} \rightarrow\{0,1\}^{*}$ with each code word $p(a)$ having length $l(a)$. [You may use, without proof, any standard results from the course.]

• # Paper 2, Section II, I

What is the information capacity of a memoryless, time-independent channel? Compute the information capacity of a binary symmetric channel with probability $p$ of error. Show the steps in your computation.

Binary digits are transmitted through a noisy channel, which is memoryless and time-independent. With probability $\alpha(0<\alpha<1)$ the digit is corrupted and noise is received, otherwise the digit is transmitted unchanged. So, if we denote the input by 0 and 1 and the output as $0, *$ and 1 with $*$ denoting the noise, the transition matrix is

$\left(\begin{array}{cc} 1-\alpha & 0 \\ \alpha & \alpha \\ 0 & 1-\alpha \end{array}\right)$

Compute the information capacity of this channel.

Explain how to code a message for transmission through the channel described above, and how to decode it, so that the probability of error for each bit is arbitrarily small.

• # Paper 3, Section I, I

Let $A$ be a random variable that takes values in the finite alphabet $\mathcal{A}$. Prove that there is a decodable binary code $c: \mathcal{A} \rightarrow\{0,1\}^{*}$ that satisfies

$H(A) \leqslant \mathbb{E}(l(A)) \leqslant H(A)+1$

where $l(a)$ is the length of the code word $c(a)$ and $H(A)$ is the entropy of $A$.

Is it always possible to find such a code with $\mathbb{E}(l(A))=H(A) ?$ Justify your answer.

• # Paper 4, Section I, I

Explain what is meant by a Bose-Ray-Chaudhuri-Hocquenghem (BCH) code with design distance $\delta$. Prove that, for such a code, the minimum distance between code words is at least $\delta$. How many errors will the code detect? How many errors will it correct?


• # Paper 1, Section I, H

A binary Huffman code is used for encoding symbols $1, \ldots, m$ occurring with respective probabilities $p_{1} \geqslant \cdots \geqslant p_{m}>0$ where $\sum_{1 \leqslant j \leqslant m} p_{j}=1$. Let $s_{1}$ be the length of a shortest codeword and $s_{m}$ the length of a longest codeword. Determine the maximal and minimal values of each of $s_{1}$ and $s_{m}$, and find binary trees for which they are attained.

• # Paper 1, Section II, H

Define the bar product $C_{1} \mid C_{2}$ of binary linear codes $C_{1}$ and $C_{2}$, where $C_{2}$ is a subcode of $C_{1}$. Relate the rank and minimum distance of $C_{1} \mid C_{2}$ to those of $C_{1}$ and $C_{2}$ and justify your answer. Show that if $C^{\perp}$ denotes the dual code of $C$, then

$\left(C_{1} \mid C_{2}\right)^{\perp}=C_{2}^{\perp} \mid C_{1}^{\perp}$

Using the bar product construction, or otherwise, define the Reed-Muller code $\mathrm{RM}(d, r)$ for $0 \leqslant r \leqslant d$. Show that if $0 \leqslant r \leqslant d-1$, then the dual of $\mathrm{RM}(d, r)$ is again a Reed-Muller code.

• # Paper 2, Section I, H

Let $A(n, d)$ denote the maximum size of a binary code of length $n$ with minimum distance $d$. For fixed $\delta$ with $0<\delta<1 / 2$, let $\alpha(\delta)=\limsup _{n} \frac{1}{n} \log _{2} A(n, n \delta)$. Show that

$1-H(\delta) \leqslant \alpha(\delta) \leqslant 1-H(\delta / 2)$

where $H(p)=-p \log _{2} p-(1-p) \log _{2}(1-p)$.

[You may assume the GSV and Hamming bounds and any form of Stirling's theorem provided you state them clearly.]

• # Paper 2, Section II, H

Define a BCH code of length $n$, where $n$ is odd, over the field of 2 elements with design distance $\delta$. Show that the minimum weight of such a code is at least $\delta$. [Results about the Vandermonde determinant may be quoted without proof, provided they are stated clearly.]

Consider a BCH code of length 31 over the field of 2 elements with design distance 8. Show that the minimum distance is at least 11. [Hint: Let $\alpha$ be a primitive element in the field of $2^{5}$ elements, and consider the minimal polynomial for certain powers of $\alpha$.]

comment
• # Paper 3, Section I, H

Describe briefly the Rabin cipher with modulus $N$, explaining how it can be deciphered by the intended recipient and why it is difficult for an eavesdropper to decipher it.

The Cabinet decides to communicate using Rabin ciphers to maintain confidentiality. The Cabinet Secretary encrypts a message, represented as a positive integer $m$, using the Rabin cipher with modulus $N$ (with $0 < m < N$) and publishes both the encrypted message and the modulus. The Defence Secretary deciphers this message to read it but then foolishly encrypts it again using a Rabin cipher with a different modulus $N^{\prime}$ (with $m < N^{\prime}$) and publishes the newly encrypted message and $N^{\prime}$. Mr Rime (the Leader of the Opposition) knows this has happened. Explain how Rime can work out what the original message was using the two different encrypted versions.

Can Rime decipher other messages sent out by the Cabinet using the original modulus $N$ ?
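
Rime's attack can be sketched with toy numbers: since $0 < m < N$ and $m < N^{\prime}$, we have $m^{2} < N N^{\prime}$, so the Chinese Remainder Theorem recovers $m^{2}$ exactly as an integer from the two ciphertexts, and an integer square root then yields $m$. The moduli below are hypothetical toys; real Rabin moduli are products of two large primes.

```python
from math import gcd, isqrt

def recover_message(c1, N1, c2, N2):
    """Recover m from c1 = m^2 mod N1 and c2 = m^2 mod N2, when 0 < m < min(N1, N2)."""
    assert gcd(N1, N2) == 1   # if not, gcd(N1, N2) already factors a modulus
    # CRT: the unique x with x = c1 (mod N1), x = c2 (mod N2), 0 <= x < N1*N2.
    x = (c1 + N1 * ((c2 - c1) * pow(N1, -1, N2) % N2)) % (N1 * N2)
    # Since m^2 < N1*N2, x is exactly m^2 as an ordinary integer.
    m = isqrt(x)
    assert m * m == x
    return m

# Hypothetical toy moduli: 7*11 and 13*17.
N1, N2 = 77, 221
m = 53                      # the original message, with m < N1 and m < N2
c1, c2 = m * m % N1, m * m % N2
print(recover_message(c1, N1, c2, N2))  # 53
```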

comment
• # Paper 4, Section I, $4 \mathrm{H}$

Describe how a stream cipher works. What is a one-time pad?

A one-time pad is used to send the message $x_{1} x_{2} x_{3} x_{4} x_{5} x_{6} y_{7}$ which is encoded as 0101011. In error, it is reused to send the message $y_{0} x_{1} x_{2} x_{3} x_{4} x_{5} x_{6}$ which is encoded as 0100010. Show that there are two possibilities for the substring $x_{1} x_{2} x_{3} x_{4} x_{5} x_{6}$, and find them.
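
A brute-force check confirms the two possibilities: the reused pad cancels in the XOR of the two ciphertexts, leaving relations between consecutive bits of $x_{1} \ldots x_{6}$.

```python
c1 = "0101011"  # encrypts x1 x2 x3 x4 x5 x6 y7
c2 = "0100010"  # encrypts y0 x1 x2 x3 x4 x5 x6  (same pad reused)

# The pad cancels: c1 XOR c2 = m1 XOR m2, revealing relations among the x_i.
d = [int(a) ^ int(b) for a, b in zip(c1, c2)]

solutions = []
for v in range(64):
    x = [(v >> (5 - i)) & 1 for i in range(6)]   # candidate x1..x6
    # positions 2..6 of d constrain consecutive bits: x_i XOR x_{i-1} = d_i
    if all(x[i] ^ x[i - 1] == d[i] for i in range(1, 6)):
        solutions.append("".join(map(str, x)))

print(solutions)  # ['000111', '111000']
```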

comment

• # Paper 1, Section I, G

Let $\mathcal{A}$ and $\mathcal{B}$ be alphabets of sizes $m$ and $a$ respectively. What does it mean to say that $c: \mathcal{A} \rightarrow \mathcal{B}^{*}$ is a decodable code? State Kraft's inequality.

Suppose that a source emits letters from the alphabet $\mathcal{A}=\{1,2, \ldots, m\}$, each letter $j$ occurring with (known) probability $p_{j}>0$. Let $S$ be the codeword-length random variable for a decodable code $c: \mathcal{A} \rightarrow \mathcal{B}^{*}$, where $|\mathcal{B}|=a$. It is desired to find a decodable code that minimizes the expected value of $a^{S}$. Establish the lower bound $\mathbb{E}\left(a^{S}\right) \geqslant\left(\sum_{j=1}^{m} \sqrt{p_{j}}\right)^{2}$, and characterise when equality occurs. [Hint. You may use without proof the Cauchy-Schwarz inequality, that (for positive $x_{i}, y_{i}$ )

$\sum_{i=1}^{m} x_{i} y_{i} \leqslant\left(\sum_{i=1}^{m} x_{i}^{2}\right)^{1 / 2}\left(\sum_{i=1}^{m} y_{i}^{2}\right)^{1 / 2}$

with equality if and only if $x_{i}=\lambda y_{i}$ for all $i$.]

comment
• # Paper 1, Section II, 12G

Define a cyclic binary code of length $n$.

Show how codewords can be identified with polynomials in such a way that cyclic binary codes correspond to ideals in the polynomial ring with a suitably chosen multiplication rule.

Prove that any cyclic binary code $C$ has a unique generator, that is, a polynomial $c(X)$ of minimum degree, such that the code consists of the multiples of this polynomial. Prove that the rank of the code equals $n-\operatorname{deg} c(X)$, and show that $c(X)$ divides $X^{n}-1$.

Show that the repetition and parity check codes are cyclic, and determine their generators.

comment
• # Paper 2, Section I, G

What is a (binary) linear code? What does it mean to say that a linear code has length $n$ and minimum weight $d$? When is a linear code perfect? Show that, if $n=2^{r}-1$, there exists a perfect linear code of length $n$ and minimum weight 3.
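
For $r=3$ the sphere-packing identity behind perfectness can be verified directly. A sketch (names ours), taking the parity-check columns to be $1, \ldots, 7$ written in binary:

```python
from itertools import product

r = 3
n = 2**r - 1   # 7

def syndrome(word):
    """Parity-check matrix with columns 1..n in binary: the syndrome is the
    XOR of the positions where the word has a 1."""
    s = 0
    for pos, bit in enumerate(word, start=1):
        if bit:
            s ^= pos
    return s

# The code is the kernel of the parity-check map: 2^(n-r) codewords.
code = [w for w in product((0, 1), repeat=n) if syndrome(w) == 0]
print(len(code))                             # 16 = 2^(7-3)
print(min(sum(w) for w in code if any(w)))   # minimum weight 3

# Perfect: the Hamming balls of radius 1 about the codewords tile F_2^n,
# since |C| * (1 + n) = 2^n.
assert len(code) * (1 + n) == 2**n
```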

comment
• # Paper 2, Section II, G

What does it mean to say that $f: \mathbb{F}_{2}^{d} \rightarrow \mathbb{F}_{2}^{d}$ is a linear feedback shift register? Let $\left(x_{n}\right)_{n \geqslant 0}$ be a stream produced by such a register. Show that there exist $N, M$ with $N+M \leqslant 2^{d}-1$ such that $x_{r+N}=x_{r}$ for all $r \geqslant M$.

Describe and justify the Berlekamp-Massey method for 'breaking' a cipher stream arising from a linear feedback register of unknown length.

Let $x_{n}, y_{n}, z_{n}$ be three streams produced by linear feedback registers. Set

\begin{aligned} &k_{n}=x_{n} \quad \text { if } y_{n}=z_{n} \\ &k_{n}=y_{n} \quad \text { if } y_{n} \neq z_{n} \end{aligned}

Show that $k_{n}$ is also a stream produced by a linear feedback register. Sketch proofs of any theorems you use.

comment
• # Paper 3, Section I, G

Describe the RSA system with public key $(N, e)$ and private key $d$. Give a simple example of how the system is vulnerable to a homomorphism attack. Explain how a signature system prevents such an attack.
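
The homomorphism attack rests on the multiplicativity of $m \mapsto m^{e} \bmod N$. A toy key (hypothetical small primes; real keys are far larger) illustrates it:

```python
# Toy RSA key (hypothetical small primes).
p, q = 61, 53
N = p * q                  # public modulus 3233
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent

def encrypt(m):
    return pow(m, e, N)

m1, m2 = 42, 99
# Multiplicativity: E(m1) * E(m2) = E(m1 * m2) mod N, so an attacker can
# forge a valid ciphertext for m1*m2 from two observed ciphertexts.
forged = encrypt(m1) * encrypt(m2) % N
print(forged == encrypt(m1 * m2 % N))    # True
print(pow(forged, d, N) == m1 * m2 % N)  # True: the forgery decrypts to m1*m2
```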

comment
• # Paper 4, Section I, G

Describe the BB84 protocol for quantum key exchange.

Suppose we attempt to implement the BB84 protocol but cannot send single photons. Instead we send $K$ photons at a time all with the same polarization. An enemy can separate one of these photons from the other $K-1$. Explain briefly how the enemy can intercept the key exchange without our knowledge.

Show that an enemy can find our common key if $K=3$. Can she do so when $K=2$ (with suitable equipment)?

comment

• # Paper 1, Section I, G

I think of an integer $n$ with $1 \leqslant n \leqslant 10^{6}$. Explain how to find $n$ using twenty questions (or fewer) of the form 'Is it true that $n \geqslant m$?' to which I answer yes or no.

I have watched a horse race with 15 horses. Is it possible to discover the order in which the horses finished by asking me twenty questions to which I answer yes or no?

Roughly how many questions of the yes/no type are required to discover the order in which $n$ horses finished if $n$ is large?

[You may assume that I answer honestly.]
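
The counting behind all three parts is a short computation:

```python
from math import ceil, log2, factorial

# Part 1: each 'Is n >= m?' question halves the candidate set, so
# ceil(log2(10^6)) questions identify n in 1..10^6.
print(ceil(log2(10**6)))          # 20

# Part 2: twenty yes/no answers distinguish at most 2^20 outcomes,
# but a race of 15 horses has 15! possible finishing orders.
print(2**20 >= factorial(15))     # False: twenty questions cannot suffice

# Part 3: for n horses roughly log2(n!) ~ n*log2(n) questions are
# needed (to leading order); bit_length approximates log2 of n!.
n = 1000
print(factorial(n).bit_length(), round(n * log2(n)))
```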

comment
• # Paper 1, Section II, $12 G$

Describe the Rabin-Williams coding scheme. Show that any method for breaking it will enable us to factorise the product of two primes.

Explain how the Rabin-Williams scheme can be used for bit sharing (that is to say 'tossing coins by phone').

comment
• # Paper 2, Section I, G

I happen to know that an apparently fair coin actually has probability $p$ of heads with $1>p>1 / 2$. I play a very long sequence of games of heads and tails in which my opponent pays me back twice my stake if the coin comes down heads and takes my stake if the coin comes down tails. I decide to bet a proportion $\alpha$ of my fortune at the end of the $n$th game in the $(n+1)$st game. Determine, giving justification, the value $\alpha_{0}$ maximizing the expected logarithm of my fortune in the long term, assuming I use the same $\alpha_{0}$ at each game. Can it actually be disadvantageous for me to choose an $\alpha<\alpha_{0}$ (in the sense that I would be better off not playing)? Can it actually be disadvantageous for me to choose an $\alpha>\alpha_{0}$?

[Moral issues should be ignored.]
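
With payoff factors $1+\alpha$ on heads and $1-\alpha$ on tails, the long-run growth rate per game is $g(\alpha)=p \log (1+\alpha)+(1-p) \log (1-\alpha)$, maximised at $\alpha_{0}=2 p-1$. A numerical sketch (hypothetical $p=0.6$) checks this and the over/under-betting behaviour:

```python
from math import log

p = 0.6   # hypothetical bias: probability of heads, 1/2 < p < 1

# Staking a fraction a of my fortune multiplies it by (1 + a) on heads
# and (1 - a) on tails, so the long-run growth rate per game is g(a).
def g(a):
    return p * log(1 + a) + (1 - p) * log(1 - a)

# Calculus gives the maximiser alpha_0 = 2p - 1; check it numerically.
grid = [i / 10000 for i in range(9999)]
alpha_star = max(grid, key=g)
print(alpha_star, 2 * p - 1)      # both approximately 0.2

# Underbetting (0 < a < alpha_0) still has g(a) > 0: better than not playing.
# Overbetting far enough makes g(a) < 0: genuinely worse than not playing.
print(g(0.05) > 0, g(0.9) < 0)    # True True
```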

comment
• # Paper 2, Section II, G

Define a cyclic code. Show that there is a bijection between the cyclic codes of length $n$ and the factors of $X^{n}-1$ over the field $\mathbb{F}_{2}$ of order 2.

What is meant by saying that $\alpha$ is a primitive $n$th root of unity in a finite field extension $K$ of $\mathbb{F}_{2}$ ? What is meant by saying that $C$ is a BCH code of length $n$ with defining set $\left\{\alpha, \alpha^{2}, \ldots, \alpha^{\delta-1}\right\}$ ? Show that such a code has minimum distance at least $\delta$.

Suppose that $K$ is a finite field extension of $\mathbb{F}_{2}$ in which $X^{7}-1$ factorises into linear factors. Show that if $\beta$ is a root of $X^{3}+X^{2}+1$ then $\beta$ is a primitive 7th root of unity and $\beta^{2}$ is also a root of $X^{3}+X^{2}+1$. Quoting any further results that you need, show that the BCH code of length 7 with defining set $\left\{\beta, \beta^{2}\right\}$ is the Hamming code.

[Results on the Vandermonde determinant may be used without proof provided they are quoted correctly.]

comment
• # Paper 3, Section I, G

What is the rank of a binary linear code $C$? What is the weight enumeration polynomial $W_{C}$ of $C$?

Show that $W_{C}(1,1)=2^{r}$ where $r$ is the rank of $C$. Show that $W_{C}(s, t)=W_{C}(t, s)$ for all $s$ and $t$ if and only if $W_{C}(1,0)=1$.

Find, with reasons, the weight enumeration polynomial of the repetition code of length $n$, and of the simple parity check code of length $n$.
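
Both enumerators can be found by direct enumeration for a small length (here $n=4$, with the convention $W_{C}(s, t)=\sum_{i} A_{i} s^{n-i} t^{i}$; a sketch, names ours):

```python
from itertools import product
from math import comb

def weight_distribution(code):
    """A_i = number of codewords of weight i; then W_C(s, t) = sum_i A_i s^(n-i) t^i."""
    n = len(code[0])
    dist = [0] * (n + 1)
    for w in code:
        dist[sum(w)] += 1
    return dist

n = 4
repetition = [(0,) * n, (1,) * n]
parity = [w for w in product((0, 1), repeat=n) if sum(w) % 2 == 0]

print(weight_distribution(repetition))  # [1, 0, 0, 0, 1]: W = s^4 + t^4
print(weight_distribution(parity))      # [1, 0, 6, 0, 1]: W = s^4 + 6 s^2 t^2 + t^4

# W_C(1, 1) counts all codewords, so it equals 2^rank.
assert sum(weight_distribution(repetition)) == 2**1    # rank 1
assert sum(weight_distribution(parity)) == 2**(n - 1)  # rank n-1
# The parity-check distribution is the even binomial coefficients.
assert weight_distribution(parity) == [comb(n, i) if i % 2 == 0 else 0 for i in range(n + 1)]
```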

comment
• # Paper 4, Section I, G

Describe a scheme for sending messages based on quantum theory which is not vulnerable to eavesdropping. You may ignore engineering problems.

comment

• # Paper 1, Section I, H

Explain what is meant by saying that a binary code $\mathcal{C}$ is a decodable code with words $C_{j}$ of length $l_{j}$ for $1 \leqslant j \leqslant n$. Prove the McMillan inequality, which states that, for such a code,

$\sum_{j=1}^{n} 2^{-l_{j}} \leqslant 1$

comment
• # Paper 1, Section II, H

State and prove Shannon's theorem for the capacity of a noisy memoryless binary symmetric channel, defining the terms you use.

[You may make use of any form of Stirling's formula and any standard theorems from probability, provided that you state them exactly.]

comment
• # Paper 2, Section I, $4 \mathrm{H}$

Describe the standard Hamming code of length 7, proving that it corrects a single error. Find its weight enumeration polynomial.
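
Enumerating the 16 codewords gives the weight enumeration polynomial directly. A sketch, with parity-check columns $1, \ldots, 7$ in binary:

```python
from itertools import product

n = 7

def syndrome(word):
    """Syndrome for the parity-check matrix whose columns are 1..7 in binary:
    the XOR of the positions holding a 1."""
    s = 0
    for pos, bit in enumerate(word, start=1):
        if bit:
            s ^= pos
    return s

hamming = [w for w in product((0, 1), repeat=n) if syndrome(w) == 0]

dist = {}
for w in hamming:
    dist[sum(w)] = dist.get(sum(w), 0) + 1
print(dict(sorted(dist.items())))
# {0: 1, 3: 7, 4: 7, 7: 1}: W(s, t) = s^7 + 7 s^4 t^3 + 7 s^3 t^4 + t^7

# Single-error correction: flipping position i changes the syndrome to i.
c = hamming[5]
received = list(c)
received[2] ^= 1                # corrupt position 3
assert syndrome(received) == 3  # the syndrome names the corrupted position
```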

comment
• # Paper 2, Section II, H

The Vandermonde matrix $V\left(x_{0}, x_{1}, \ldots, x_{r-1}\right)$ is the $r \times r$ matrix with $(i, j)$th entry $x_{i-1}^{j-1}$. Find an expression for $\operatorname{det} V\left(x_{0}, x_{1}, \ldots, x_{r-1}\right)$ as a product. Explain why this expression holds if we work modulo a prime $p$.

Show that $\operatorname{det} V\left(x_{0}, x_{1}, \ldots, x_{r-1}\right) \equiv 0$ modulo $p$ if $r>p$, and that there exist $x_{0}, \ldots, x_{p-1}$ such that $\operatorname{det} V\left(x_{0}, x_{1}, \ldots, x_{p-1}\right) \not \equiv 0$. By using Wilson's theorem, or otherwise, find the possible values of $\operatorname{det} V\left(x_{0}, x_{1}, \ldots, x_{p-1}\right)$ modulo $p$.

The Dark Lord Y'Trinti has acquired the services of the dwarf Trigon who can engrave pairs of very large integers on very small rings. The Dark Lord wishes Trigon to engrave $n$ rings in such a way that anyone who acquires $r$ of the rings and knows the Prime Perilous $p$ can deduce the Integer $N$ of Power, but owning $r-1$ rings will give no information whatsoever. The integers $N$ and $p$ are very large and $p>N$. Advise the Dark Lord.

For reasons to be explained in the prequel, Trigon engraves an $(n+1)$ st ring with random integers. A band of heroes (who know the Prime Perilous and all the information contained in this question) set out to recover the rings. What, if anything, can they say, with very high probability, about the Integer of Power if they have $r$ rings (possibly including the fake)? What can they say if they have $r+1$ rings? What if they have $r+2$ rings?
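
The standard advice here is Shamir's $(r, n)$ threshold scheme, which fits the requirements: any $r$ rings determine the secret by Lagrange interpolation, while $r-1$ rings are consistent with every possible secret. A sketch with hypothetical small values of the Prime Perilous and the Integer of Power:

```python
import random

def make_rings(secret, r, n, p):
    """Shamir's (r, n) threshold scheme over F_p: ring i carries the pair
    (i, f(i)) for a random polynomial f of degree r-1 with f(0) = secret."""
    coeffs = [secret] + [random.randrange(p) for _ in range(r - 1)]
    f = lambda x: sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(rings, p):
    """Lagrange interpolation at x = 0 from any r rings."""
    total = 0
    for xi, yi in rings:
        num = den = 1
        for xj, _ in rings:
            if xj != xi:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

p = 1_000_003        # the Prime Perilous (hypothetical), with p > N
N_power = 777_777    # the Integer of Power (hypothetical)
rings = make_rings(N_power, r=3, n=7, p=p)

print(reconstruct(random.sample(rings, 3), p))  # 777777 from any 3 rings
# Any 2 rings are consistent with every possible secret, so reveal nothing.
```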

comment
• # Paper 3, Section I, $4 \mathrm{H}$

What is a linear code? What is a parity check matrix for a linear code? What is the minimum distance $d(C)$ for a linear code $C$?

If $C_{1}$ and $C_{2}$ are linear codes having a certain relation (which you should specify), define the bar product $C_{1} \mid C_{2}$. Show that

$d\left(C_{1} \mid C_{2}\right)=\min \left\{2 d\left(C_{1}\right), d\left(C_{2}\right)\right\}$

If $C_{1}$ has parity check matrix $P_{1}$ and $C_{2}$ has parity check matrix $P_{2}$, find a parity check matrix for $C_{1} \mid C_{2}$.

comment
• # Paper 4, Section I, H

What is the discrete logarithm problem?

Describe the Diffie-Hellman key exchange system for two people. What is the connection with the discrete logarithm problem? Why might one use this scheme rather than just a public key system or a classical (pre-1960) coding system?

Extend the Diffie-Hellman system to $n$ people using $n(n-1)$ transmitted numbers.
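
For two people the exchange looks as follows (toy parameters, hypothetical; a real exchange uses a very large prime $p$ and a generator $g$ of a large-order subgroup):

```python
import random

# Hypothetical toy parameters.
p, g = 104729, 2

a = random.randrange(2, p - 1)   # Alice's secret exponent
b = random.randrange(2, p - 1)   # Bob's secret exponent

A = pow(g, a, p)   # Alice transmits g^a mod p in clear
B = pow(g, b, p)   # Bob transmits g^b mod p in clear

# Each party raises the other's public value to its own secret exponent.
key_alice = pow(B, a, p)
key_bob = pow(A, b, p)
print(key_alice == key_bob)   # True: both equal g^(ab) mod p
```

An eavesdropper sees only $g^{a}$ and $g^{b}$; recovering $g^{ab}$ from these is believed to be as hard as the discrete logarithm problem.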

comment

• # Paper 1, Section I, H

I am putting up my Christmas lights. If I plug in a set of bulbs and one is defective, none will light up. A badly written note left over from the previous year tells me that exactly one of my 10 bulbs is defective and that the probability that the $k$th bulb is defective is $k / 55$.

(i) Find an explicit procedure for identifying the defective bulb in the least expected number of steps.

[You should explain your method but no proof is required.]

(ii) Is there a different procedure from the one you gave in (i) with the same expected number of steps? Either write down another procedure and explain briefly why it gives the same expected number or explain briefly why no such procedure exists.

(iii) Because I make such a fuss about each test, my wife wishes me to tell her the maximum number $N$ of trials that might be required. Will the procedure in (i) give the minimum $N$ ? Either write down another procedure and explain briefly why it gives a smaller $N$ or explain briefly why no such procedure exists.

comment
• # Paper 1, Section II, H

(i) State and prove Gibbs' inequality.

(ii) A casino offers me the following game: I choose strictly positive numbers $a_{1}, \ldots, a_{n}$ with $\sum_{j=1}^{n} a_{j}=1$. I give the casino my entire fortune $f$ and roll an $n$-sided die. With probability $p_{j}$ the die shows $j$, in which case the casino returns $u_{j}^{-1} a_{j} f$. If I intend to play the game many times (staking my entire fortune each time) explain carefully why I should choose $a_{1}, \ldots, a_{n}$ to maximise $\sum_{j=1}^{n} p_{j} \log \left(u_{j}^{-1} a_{j}\right)$.

[You should assume $n \geqslant 2$ and $u_{j}, p_{j}>0$ for each $j .$ ]

(iii) Determine the appropriate $a_{1}, \ldots, a_{n}$. Let $\sum_{i=1}^{n} u_{i}=U$. Show that, if $U<1$, then, in the long run with high probability, my fortune increases. Show that, if $U>1$, the casino can choose $u_{1}, \ldots, u_{n}$ in such a way that, in the long run with high probability, my fortune decreases. Is it true that, if $U>1$, any choice of $u_{1}, \ldots, u_{n}$ will ensure that, in the long run with high probability, my fortune decreases? Why?

comment
• # Paper 2, Section I, $4 \mathrm{H}$

Knowing that

$25 \equiv 2886^{2} \pmod{3953}$

and that 3953 is the product of two primes $p$ and $q$, find $p$ and $q$.

[You should explain your method in sufficient detail to show that it is reasonably general.]
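
The general method: from $x^{2} \equiv y^{2} \pmod{N}$ with $x \not\equiv \pm y$, the number $\gcd(x-y, N)$ is a proper factor of $N$. Here $2886^{2} \equiv 25 \equiv 5^{2}$, so:

```python
from math import gcd

N = 3953
x, y = 2886, 5   # x^2 = y^2 (mod N) with x != ±y (mod N)
assert x * x % N == y * y % N == 25

# N divides (x - y)(x + y) but divides neither factor alone, so the gcd
# pulls out a proper factor of N. This is the usual difference-of-squares
# step underlying Fermat/Dixon-style factoring methods.
p = gcd(x - y, N)
q = N // p
print(p, q)          # 67 59
assert p * q == N
```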

comment
• # Paper 2, Section II, H

Describe the construction of the Reed-Muller code $R M(m, d)$. Establish its information rate and minimum weight.

Show that every codeword in $R M(d, d-1)$ has even weight. By considering $\mathbf{x} \wedge \mathbf{y}$ with $\mathbf{x} \in R M(m, r)$ and $\mathbf{y} \in R M(m, m-r-1)$, or otherwise, show that $R M(m, m-r-1) \subseteq R M(m, r)^{\perp}$. Show that, in fact, $R M(m, m-r-1)=R M(m, r)^{\perp}$.

comment
• # Paper 3, Section I, $4 \mathrm{H}$

Define a binary code of length 15 with information rate $11 / 15$ which will correct single errors. Show that it has the rate stated and give an explicit procedure for identifying the error. Show that the procedure works.

[Hint: You may wish to imitate the corresponding discussion for a code of length 7.]
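
A sketch of one realisation (names ours): parity-check columns $1, \ldots, 15$ in binary, check bits in positions $1, 2, 4, 8$ (leaving 11 free information bits, hence rate $11/15$), and the syndrome of a single-error word names the error position.

```python
import random

n = 15  # = 2^4 - 1: 4 check bits and 11 information bits, rate 11/15

def syndrome(word):
    """H * word for the parity-check matrix whose columns are 1..15 in
    binary: the XOR of the positions where the word has a 1."""
    s = 0
    for pos, bit in enumerate(word, start=1):
        if bit:
            s ^= pos
    return s

def correct(received):
    """A nonzero syndrome names the (single) error position."""
    s = syndrome(received)
    fixed = list(received)
    if s:
        fixed[s - 1] ^= 1
    return fixed

# Build a codeword: choose the 11 free bits, then solve for the check bits
# in positions 1, 2, 4, 8 (position 2^k controls bit k of the syndrome).
word = [0] * n
for pos in range(1, n + 1):
    if pos not in (1, 2, 4, 8):
        word[pos - 1] = random.randint(0, 1)
s = syndrome(word)
for check in (1, 2, 4, 8):
    if s & check:
        word[check - 1] = 1
assert syndrome(word) == 0           # now a codeword

err = random.randrange(n)
received = list(word)
received[err] ^= 1                   # introduce a single-bit error
print(correct(received) == word)     # True: the error is located and fixed
```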

comment
• # Paper 4, Section I, H

What is a general feedback register? What is a linear feedback register? Give an example of a general feedback register which is not a linear feedback register and prove that it has the stated property.

By giving proofs or counterexamples, establish which, if any, of the following statements are true and which, if any, are false.

(i) Given two linear feedback registers, there always exist non-zero initial fills for which the outputs are identical.

(ii) If two linear feedback registers have different lengths, there do not exist non-zero initial fills for which the outputs are identical.

(iii) If two linear feedback registers have different lengths, there exist non-zero initial fills for which the outputs are not identical.

(iv) There exist two linear feedback registers of different lengths and non-zero initial fills for which the outputs are identical.

comment

• # 1.I.4G

Define the entropy $H(X)$ of a random variable $X$ that takes no more than $N$ different values. What are the maximum and the minimum values for the entropy for a fixed value of $N$ ? Explain when the maximum and minimum are attained. You should prove any inequalities that you use.

comment
• # 1.II.12G

State Shannon's Noisy Coding Theorem for a binary symmetric channel.

Define the mutual information of two discrete random variables $X$ and $Y$. Prove that the mutual information is symmetric and non-negative. Define also the information capacity of a channel.

A channel transmits numbers chosen from the alphabet $\mathcal{A}=\{0,1,2\}$ and has transition matrix

$\left(\begin{array}{ccc} 1-2 \beta & \beta & \beta \\ \beta & 1-2 \beta & \beta \\ \beta & \beta & 1-2 \beta \end{array}\right)$

for a number $\beta$ with $0 \leqslant \beta \leqslant \frac{1}{3}$. Calculate the information capacity of the channel.

comment
• # 2.I.4G

Describe briefly the Shannon-Fano and Huffman binary codes for a finite alphabet. Find examples of such codes for the alphabet $\mathcal{A}=\{a, b, c, d\}$ when the four letters are taken with probabilities $0.4,0.3,0.2$ and $0.1$ respectively.

comment
• # 2.II.12G

Describe the Rabin cipher with modulus $N$, explaining how it can be deciphered by the intended recipient and why it is difficult for an interceptor to decipher it.

The Bursars' Committee decides to communicate using Rabin ciphers to maintain confidentiality. The secretary of the committee encrypts a message, thought of as a positive integer $m$, using the Rabin cipher with modulus $N$ (with $0 < m < N$) and publishes both the encrypted message and the modulus. A foolish bursar deciphers this message to read it but then encrypts it again using a Rabin cipher with a different modulus $N^{\prime}$ (with $m < N^{\prime}$) and publishes the newly encrypted message and $N^{\prime}$. The president of CUSU, who happens to be a talented mathematician, knows that this has happened. Explain how the president can work out what the original message was using the two different encrypted versions.

Can the president of CUSU also decipher other messages sent out by the Bursars' Committee?

comment
• # 3.I.4G

Define the Hamming code $h: \mathbb{F}_{2}^{4} \rightarrow \mathbb{F}_{2}^{7}$ and prove that the minimum distance between two distinct codewords is 3. Explain how the Hamming code allows one error to be corrected.

A new code $c: \mathbb{F}_{2}^{4} \rightarrow \mathbb{F}_{2}^{8}$ is obtained by using the Hamming code for the first 7 bits and taking the last bit as a check digit on the previous 7. Find the minimum distance between two distinct codewords for this code. How many errors can this code detect? How many errors can it correct?

comment
• # 4.I.4G

What is a binary cyclic code of length $N$? What is the generator polynomial for such a cyclic code? Prove that the generator polynomial is a factor of $X^{N}-1$ over the field $\mathbb{F}_{2}$.

Find all the binary cyclic codes of length 5.
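
Brute-force polynomial division over $\mathbb{F}_{2}$ finds the divisors of $X^{5}-1$ and hence the cyclic codes (a sketch, representing polynomials as bitmasks):

```python
def gf2_divmod(a, b):
    """Polynomial division over F_2; polynomials are int bitmasks
    (bit k holds the coefficient of X^k)."""
    q = 0
    while a and a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift
        a ^= b << shift
    return q, a

n = 5
xn1 = (1 << n) | 1   # X^5 + 1, which equals X^5 - 1 over F_2

# Cyclic codes of length 5 correspond exactly to the divisors of X^5 + 1.
generators = [1] + [g for g in range(2, xn1 + 1) if gf2_divmod(xn1, g)[1] == 0]

for g in generators:
    rank = n - (g.bit_length() - 1)
    print(f"generator {g:06b}, rank {rank}")
# Four codes: generator 1 (all of F_2^5), X+1 (the parity check code),
# X^4+X^3+X^2+X+1 (the repetition code) and X^5+1 (the zero code).
```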

comment

• # 3.I.4G

Compute the rank and minimum distance of the cyclic code with generator polynomial $g(X)=X^{3}+X+1$ and parity-check polynomial $h(X)=X^{4}+X^{2}+X+1$. Now let $\alpha$ be a root of $g(X)$ in the field with 8 elements. We receive the word $r(X)=X^{5}+X^{3}+X \pmod{X^{7}-1}$. Verify that $r(\alpha)=\alpha^{4}$, and hence decode $r(X)$ using minimum-distance decoding.
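
The verification and the decoding step can be checked mechanically in $\mathbb{F}_{8}=\mathbb{F}_{2}[X] /(g)$, representing polynomials as bitmasks (a sketch, assuming the standard single-error decoding step for this code):

```python
def gf2_mod(a, m):
    """Reduce the bitmask polynomial a modulo m over F_2."""
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

g = 0b1011    # g(X) = X^3 + X + 1; alpha denotes a root of g in F_8
r = 0b101010  # received word r(X) = X^5 + X^3 + X

def alpha_power(k):
    """alpha^k as an element of F_8 = F_2[X]/(g)."""
    return gf2_mod(1 << k, g)

# Evaluate r(alpha) by summing alpha^k over the set bits of r.
r_alpha = 0
for k in range(r.bit_length()):
    if (r >> k) & 1:
        r_alpha ^= alpha_power(k)
print(r_alpha == alpha_power(4))    # True: the syndrome is alpha^4

# For a single error, syndrome alpha^4 locates the error at the X^4 term.
c = r ^ (1 << 4)
print(f"decoded codeword {c:07b}")  # 0111010, i.e. X^5 + X^4 + X^3 + X
# Sanity check: the corrected word is a multiple of the generator g(X).
assert gf2_mod(c, g) == 0
```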

comment
• # 1.I.4G

Let $\Sigma_{1}$ and $\Sigma_{2}$ be alphabets of sizes $m$ and $a$. What does it mean to say that $f: \Sigma_{1} \rightarrow \Sigma_{2}^{*}$ is a decipherable code? State the inequalities of Kraft and Gibbs, and deduce that if letters are drawn from $\Sigma_{1}$ with probabilities $p_{1}, \ldots, p_{m}$ then the expected word length is at least $H\left(p_{1}, \ldots, p_{m}\right) / \log a$.

comment
• # 1.II.11G

Define the bar product $C_{1} \mid C_{2}$ of linear codes $C_{1}$ and $C_{2}$, where $C_{2}$ is a subcode of $C_{1}$. Relate the rank and minimum distance of $C_{1} \mid C_{2}$ to those of $C_{1}$ and $C_{2}$. Show that if $C^{\perp}$ denotes the dual code of $C$, then

$\left(C_{1} \mid C_{2}\right)^{\perp}=C_{2}^{\perp} \mid C_{1}^{\perp}$

Using the bar product construction, or otherwise, define the Reed-Muller code $R M(d, r)$ for $0 \leqslant r \leqslant d$. Show that if $0 \leqslant r \leqslant d-1$, then the dual of $R M(d, r)$ is again a Reed-Muller code.

comment
• # 2.I.4G

Briefly explain how and why a signature scheme is used. Describe the El Gamal scheme.

comment
• # 2.II.11G

Define the capacity of a discrete memoryless channel. State Shannon's second coding theorem and use it to show that the discrete memoryless channel with channel matrix

$\left(\begin{array}{ll} 1 & 0 \\ \frac{1}{2} & \frac{1}{2} \end{array}\right)$

has capacity $\log 5-2$.

comment
• # 4.I.4G

What is a linear feedback shift register? Explain the Berlekamp-Massey method for recovering the feedback polynomial of a linear feedback shift register from its output. Illustrate in the case when we observe output

$101011001000 \ldots$
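
A textbook implementation of Berlekamp-Massey over $\mathbb{F}_{2}$ (a sketch, names ours) recovers, for this output, a register of length 4 with feedback polynomial $X^{4}+X+1$, and regenerating the stream from the recovered register confirms it:

```python
def berlekamp_massey(bits):
    """Berlekamp-Massey over F_2: return (L, C) where C lists the coefficients
    of the connection polynomial (C[0] = 1) of the shortest LFSR producing
    the sequence, and L is the register length."""
    n = len(bits)
    C = [1] + [0] * n   # current connection polynomial
    B = [1] + [0] * n   # copy from before the last length change
    L, m = 0, 1
    for i in range(n):
        d = bits[i]                     # discrepancy with the LFSR prediction
        for j in range(1, L + 1):
            d ^= C[j] & bits[i - j]
        if d:
            T = C[:]
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            if 2 * L <= i:
                L, B, m = i + 1 - L, T, 0
        m += 1
    return L, C[:L + 1]

def run_lfsr(C, seed, length):
    """Regenerate a stream from the connection polynomial and an initial fill."""
    s = list(seed)
    while len(s) < length:
        s.append(sum(s[-j] for j in range(1, len(C)) if C[j]) % 2)
    return s

observed = [1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
L, C = berlekamp_massey(observed)
print(L, C)   # 4 [1, 1, 0, 0, 1], i.e. feedback polynomial X^4 + X + 1
assert run_lfsr(C, observed[:L], len(observed)) == observed
```

The same routine handles the other output sequence asked about elsewhere on these papers.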

comment

• # 1.I.4G

Define a linear feedback shift register. Explain the Berlekamp-Massey method for "breaking" a key stream produced by a linear feedback shift register of unknown length. Use it to find the feedback polynomial of a linear feedback shift register with output sequence

$010111100010 \ldots$

comment
• # 2.I.4G

Let $\Sigma_{1}$ and $\Sigma_{2}$ be alphabets of sizes $m$ and $a$. What does it mean to say that an $a$-ary code $f: \Sigma_{1} \rightarrow \Sigma_{2}^{*}$ is decipherable? Show that if $f$ is decipherable then the word lengths $s_{1}, \ldots, s_{m}$ satisfy

$\sum_{i=1}^{m} a^{-s_{i}} \leqslant 1$

Find a decipherable binary code consisting of codewords 011, 0111, 01111, 11111, and three further codewords of length 2. How do you know the example you have given is decipherable?

comment
• # 2.II.12G

Define a cyclic code. Show that there is a bijection between the cyclic codes of length $n$, and the factors of $X^{n}-1$ in $\mathbb{F}_{2}[X]$.

If $n$ is an odd integer then we can find a finite extension $K$ of $\mathbb{F}_{2}$ that contains a primitive $n$th root of unity $\alpha$. Show that a cyclic code of length $n$ with defining set $\left\{\alpha, \alpha^{2}, \ldots, \alpha^{\delta-1}\right\}$