Paper 2, Section II, K

Principles of Statistics | Part II, 2011

Random variables $X_{1}, \ldots, X_{n}$ are independent and identically distributed from the normal distribution with unknown mean $M$ and unknown precision (inverse variance) $H$. Show that the likelihood function, for data $X_{1}=x_{1}, \ldots, X_{n}=x_{n}$, is

L_{n}(\mu, h) \propto h^{n / 2} \exp \left(-\frac{1}{2} h\left\{n(\bar{x}-\mu)^{2}+S\right\}\right)

where $\bar{x}:=n^{-1} \sum_{i} x_{i}$ and $S:=\sum_{i}\left(x_{i}-\bar{x}\right)^{2}$.
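For reference, one route to this factorisation: each observation contributes a normal density with precision $h$, so

L_{n}(\mu, h) \propto \prod_{i=1}^{n} h^{1/2} \exp \left(-\tfrac{1}{2} h\left(x_{i}-\mu\right)^{2}\right)=h^{n / 2} \exp \left(-\tfrac{1}{2} h \sum_{i}\left(x_{i}-\mu\right)^{2}\right),

and the sum of squares splits as $\sum_{i}\left(x_{i}-\mu\right)^{2}=S+n(\bar{x}-\mu)^{2}$, since the cross term $2(\bar{x}-\mu) \sum_{i}\left(x_{i}-\bar{x}\right)$ vanishes.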

A bivariate prior distribution for $(M, H)$ is specified, in terms of hyperparameters $\left(\alpha_{0}, \beta_{0}, m_{0}, \lambda_{0}\right)$, as follows. The marginal distribution of $H$ is $\Gamma\left(\alpha_{0}, \beta_{0}\right)$, with density

\pi(h) \propto h^{\alpha_{0}-1} e^{-\beta_{0} h} \quad(h>0),

and the conditional distribution of $M$, given $H=h$, is normal with mean $m_{0}$ and precision $\lambda_{0} h$.

Show that the conditional prior distribution of $H$, given $M=\mu$, is

H \mid M=\mu \;\sim\; \Gamma\left(\alpha_{0}+\frac{1}{2},\; \beta_{0}+\frac{1}{2} \lambda_{0}\left(\mu-m_{0}\right)^{2}\right).
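For reference, this follows from Bayes' rule applied to the two densities above:

\pi(h \mid \mu) \propto \pi(h)\, \pi(\mu \mid h) \propto h^{\alpha_{0}-1} e^{-\beta_{0} h} \cdot\left(\lambda_{0} h\right)^{1 / 2} \exp \left(-\tfrac{1}{2} \lambda_{0} h\left(\mu-m_{0}\right)^{2}\right) \propto h^{\left(\alpha_{0}+\frac{1}{2}\right)-1} \exp \left(-\left(\beta_{0}+\tfrac{1}{2} \lambda_{0}\left(\mu-m_{0}\right)^{2}\right) h\right),

which is the stated Gamma density.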

Show that the posterior joint distribution of $(M, H)$, given $X_{1}=x_{1}, \ldots, X_{n}=x_{n}$, has the same form as the prior, with updated hyperparameters $\left(\alpha_{n}, \beta_{n}, m_{n}, \lambda_{n}\right)$ which you should express in terms of the prior hyperparameters and the data.

[You may use the identity

p(t-a)^{2}+q(t-b)^{2}=(t-\delta)^{2}+p q(a-b)^{2}

where $p+q=1$ and $\delta=p a+q b$.]
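For reference, a sketch of the update: multiplying the joint prior density by the likelihood gives

\pi(\mu, h \mid x) \propto h^{\alpha_{0}-\frac{1}{2}+\frac{n}{2}} \exp \left(-h\left\{\beta_{0}+\tfrac{1}{2} S+\tfrac{1}{2} \lambda_{0}\left(\mu-m_{0}\right)^{2}+\tfrac{1}{2} n(\mu-\bar{x})^{2}\right\}\right).

Writing $\lambda_{0}\left(\mu-m_{0}\right)^{2}+n(\mu-\bar{x})^{2}=\left(\lambda_{0}+n\right)\left\{p\left(\mu-m_{0}\right)^{2}+q(\mu-\bar{x})^{2}\right\}$ with $p=\lambda_{0} /\left(\lambda_{0}+n\right)$ and $q=n /\left(\lambda_{0}+n\right)$, the quoted identity collects the $\mu$-dependence into a single quadratic, and matching the result to the prior form yields

\lambda_{n}=\lambda_{0}+n, \quad m_{n}=\frac{\lambda_{0} m_{0}+n \bar{x}}{\lambda_{0}+n}, \quad \alpha_{n}=\alpha_{0}+\frac{n}{2}, \quad \beta_{n}=\beta_{0}+\frac{S}{2}+\frac{\lambda_{0} n\left(\bar{x}-m_{0}\right)^{2}}{2\left(\lambda_{0}+n\right)}.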

Explain how you could implement Gibbs sampling to generate a random sample from the posterior joint distribution.
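For concreteness, a minimal sketch in Python/numpy (the function name `gibbs`, the seed, and the starting point are illustrative choices, not part of the question). By conjugacy the two full conditionals are $M \mid H=h \sim N\left(m_{n},\left(\lambda_{n} h\right)^{-1}\right)$ and, by the earlier result applied to the posterior hyperparameters, $H \mid M=\mu \sim \Gamma\left(\alpha_{n}+\frac{1}{2}, \beta_{n}+\frac{1}{2} \lambda_{n}\left(\mu-m_{n}\right)^{2}\right)$; the sampler alternates draws from these.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed


def gibbs(x, alpha0, beta0, m0, lam0, n_iter=10_000):
    """Gibbs sampler for the normal-gamma posterior of (M, H)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    S = ((x - xbar) ** 2).sum()

    # Posterior hyperparameters from the conjugate update above.
    lam_n = lam0 + n
    m_n = (lam0 * m0 + n * xbar) / lam_n
    alpha_n = alpha0 + n / 2
    beta_n = beta0 + S / 2 + lam0 * n * (xbar - m0) ** 2 / (2 * lam_n)

    mu = m_n  # illustrative starting point
    samples = np.empty((n_iter, 2))
    for t in range(n_iter):
        # H | M = mu is Gamma; numpy's gamma takes (shape, scale = 1/rate).
        h = rng.gamma(alpha_n + 0.5,
                      1.0 / (beta_n + 0.5 * lam_n * (mu - m_n) ** 2))
        # M | H = h is normal with mean m_n and precision lam_n * h.
        mu = rng.normal(m_n, 1.0 / np.sqrt(lam_n * h))
        samples[t] = (mu, h)
    return samples
```

After discarding an initial burn-in, successive pairs $(\mu, h)$ are approximately distributed according to the joint posterior.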
