Paper 2, Section II, 28K

Principles of Statistics | Part II, 2018

We consider the model $\{\mathcal{N}(\theta, I_{p}), \theta \in \mathbb{R}^{p}\}$ of a Gaussian distribution in dimension $p \geqslant 3$, with unknown mean $\theta$ and known identity covariance matrix $I_{p}$. We estimate $\theta$ based on one observation $X \sim \mathcal{N}(\theta, I_{p})$, under the loss function

$$\ell(\theta, \delta)=\|\theta-\delta\|_{2}^{2} .$$

(a) Define the risk of an estimator $\hat{\theta}$. Compute the maximum likelihood estimator $\hat{\theta}_{MLE}$ of $\theta$ and its risk for any $\theta \in \mathbb{R}^{p}$.
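As a numerical aside (not part of the question): the MLE from a single observation is $\hat{\theta}_{MLE} = X$, and its quadratic risk is $\mathbb{E}_\theta\|X-\theta\|_2^2 = p$, the sum of $p$ unit variances, for every $\theta$. A minimal Monte Carlo sketch of this constant risk; the seed, dimension, and choice of $\theta$ are arbitrary:

```python
import numpy as np

# Sanity check: the risk of theta_hat = X under quadratic loss is p,
# independent of theta. Seed, p, theta, and sample size are arbitrary.
rng = np.random.default_rng(0)
p = 5
theta = np.ones(p)
X = rng.normal(loc=theta, scale=1.0, size=(100_000, p))
risk = np.mean(np.sum((X - theta) ** 2, axis=1))
print(risk)  # close to p = 5
```

Repeating this with a different `theta` gives the same answer, illustrating that the risk of the MLE is constant over $\mathbb{R}^p$.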

(b) Define what an admissible estimator is. Is $\hat{\theta}_{MLE}$ admissible?
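A hedged numerical illustration of the standard answer: for $p \geqslant 3$ the James–Stein estimator $\delta_{JS}(X) = \bigl(1 - \tfrac{p-2}{\|X\|^2}\bigr)X$ is known to strictly dominate the MLE, so the MLE is inadmissible. The comparison point $\theta$, seed, and sample size below are arbitrary:

```python
import numpy as np

# Monte Carlo comparison of MLE vs James-Stein risk at one theta.
# For p >= 3, James-Stein has strictly smaller risk everywhere.
rng = np.random.default_rng(2)
p, n = 5, 200_000
theta = np.full(p, 0.5)                               # arbitrary theta
X = theta + rng.normal(size=(n, p))
js = (1 - (p - 2) / np.sum(X**2, axis=1, keepdims=True)) * X
risk_mle = np.mean(np.sum((X - theta) ** 2, axis=1))  # approx p
risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))  # strictly below p
print(risk_mle, risk_js)
```

The gap is largest near $\theta = 0$ and shrinks as $\|\theta\|$ grows, which is consistent with the shrinkage interpretation of James–Stein.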

(c) For any $c>0$, let $\pi_{c}(\theta)$ be the prior $\mathcal{N}(0, c^{2} I_{p})$. Find a Bayes optimal estimator $\hat{\theta}_{c}$ under this prior with the quadratic loss, and compute its Bayes risk.
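A numerical sketch of the standard conjugate computation: under the $\mathcal{N}(0, c^{2}I_{p})$ prior, the posterior of $\theta$ given $X$ is Gaussian with mean $\tfrac{c^{2}}{1+c^{2}}X$, so the Bayes estimator under quadratic loss is this posterior mean, with Bayes risk $\tfrac{p\,c^{2}}{1+c^{2}}$. A Monte Carlo check (arbitrary seed, $p$, and $c$):

```python
import numpy as np

# Verify the Bayes risk p*c^2/(1+c^2) of the posterior-mean estimator
# by sampling from the joint distribution of (theta, X).
rng = np.random.default_rng(1)
p, c, n = 4, 2.0, 200_000
theta = rng.normal(scale=c, size=(n, p))         # theta ~ pi_c
X = theta + rng.normal(size=(n, p))              # X | theta ~ N(theta, I_p)
delta = (c**2 / (1 + c**2)) * X                  # posterior mean = Bayes estimator
bayes_risk = np.mean(np.sum((delta - theta) ** 2, axis=1))
print(bayes_risk)  # close to p * c^2 / (1 + c^2) = 3.2
```

Note the shrinkage factor $\tfrac{c^{2}}{1+c^{2}} \to 1$ as $c \to \infty$, so the Bayes estimator approaches the MLE for diffuse priors.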

(d) Show that $\hat{\theta}_{MLE}$ is minimax.
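One standard route, sketched here as an outline rather than a full solution: every Bayes risk lower-bounds the minimax risk, and the Bayes risks from part (c) approach the constant risk of the MLE as $c \to \infty$:

```latex
\[
\inf_{\hat{\theta}} \sup_{\theta \in \mathbb{R}^{p}} R(\theta, \hat{\theta})
\;\geqslant\; \inf_{\hat{\theta}} \int R(\theta, \hat{\theta}) \, d\pi_{c}(\theta)
\;=\; \frac{p\, c^{2}}{1+c^{2}} \;\xrightarrow[c \to \infty]{}\; p .
\]
```

Since $R(\theta, \hat{\theta}_{MLE}) = p$ for all $\theta$, the MLE attains this lower bound, hence is minimax.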

[You may use results from the course provided that you state them clearly.]
