Paper 4, Section II, 27K

Principles of Statistics | Part II, 2017

For the statistical model $\left\{\mathcal{N}_{d}(\theta, \Sigma), \theta \in \mathbb{R}^{d}\right\}$, where $\Sigma$ is a known, positive-definite $d \times d$ matrix, we want to estimate $\theta$ based on $n$ i.i.d. observations $X_{1}, \ldots, X_{n}$ with distribution $\mathcal{N}_{d}(\theta, \Sigma)$.

(a) Derive the maximum likelihood estimator $\hat{\theta}_{n}$ of $\theta$. What is the distribution of $\hat{\theta}_{n}$?
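A minimal worked sketch for (a), writing $\bar{X}_{n}$ for the sample mean (notation introduced here, not fixed by the question): the log-likelihood is
$$
\ell_{n}(\theta)=-\frac{n}{2} \log \operatorname{det}(2 \pi \Sigma)-\frac{1}{2} \sum_{i=1}^{n}\left(X_{i}-\theta\right)^{\top} \Sigma^{-1}\left(X_{i}-\theta\right),
$$
and setting $\nabla_{\theta} \ell_{n}(\theta)=n \Sigma^{-1}\left(\bar{X}_{n}-\theta\right)=0$ gives
$$
\hat{\theta}_{n}=\bar{X}_{n}=\frac{1}{n} \sum_{i=1}^{n} X_{i}, \qquad \hat{\theta}_{n} \sim \mathcal{N}_{d}\!\left(\theta, \tfrac{1}{n} \Sigma\right).
$$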

(b) For $\alpha \in (0,1)$, construct a confidence region $C_{n}^{\alpha}$ such that $\mathbf{P}_{\theta}\left(\theta \in C_{n}^{\alpha}\right)=1-\alpha$.
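For (b), one standard construction (a sketch, not the only possibility) uses the exact pivot $n\left(\hat{\theta}_{n}-\theta\right)^{\top} \Sigma^{-1}\left(\hat{\theta}_{n}-\theta\right) \sim \chi_{d}^{2}$: taking $\chi_{d, \alpha}^{2}$ to be the upper $\alpha$-quantile of $\chi_{d}^{2}$, the ellipsoid $C_{n}^{\alpha}=\left\{\theta: n\left(\hat{\theta}_{n}-\theta\right)^{\top} \Sigma^{-1}\left(\hat{\theta}_{n}-\theta\right) \leqslant \chi_{d, \alpha}^{2}\right\}$ has exact coverage $1-\alpha$. The snippet below is a minimal simulation sketch of that coverage; the particular $d$, $n$, $\theta$ and $\Sigma$ are arbitrary choices for illustration, not part of the question.

```python
import numpy as np
from scipy.stats import chi2

# NOTE: d, n, theta and Sigma below are arbitrary illustrative choices,
# not values taken from the question.
rng = np.random.default_rng(0)
d, n, alpha = 3, 50, 0.05
theta = np.array([1.0, -2.0, 0.5])            # true mean
Sigma = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.0, 0.2],
                  [0.0, 0.2, 1.5]])           # known positive-definite covariance
Sigma_inv = np.linalg.inv(Sigma)
quantile = chi2.ppf(1 - alpha, df=d)          # upper-alpha quantile of chi^2_d

covered = 0
n_reps = 10_000
for _ in range(n_reps):
    X = rng.multivariate_normal(theta, Sigma, size=n)
    theta_hat = X.mean(axis=0)                # MLE from part (a): the sample mean
    diff = theta_hat - theta
    stat = n * diff @ Sigma_inv @ diff        # exact chi^2_d pivot
    covered += stat <= quantile               # does C_n^alpha contain theta?

print(f"empirical coverage: {covered / n_reps:.3f}  (nominal: {1 - alpha})")
```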

(c) For $\Sigma = I_{d}$, compute the maximum likelihood estimator of $\theta$ for the following parameter spaces:

(i) $\Theta=\left\{\theta:\|\theta\|_{2}=1\right\}$.

(ii) $\Theta=\left\{\theta: v^{\top} \theta=0\right\}$ for some unit vector $v \in \mathbb{R}^{d}$.
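A sketch covering both parts of (c), with $\bar{X}_{n}$ as above: since $\sum_{i=1}^{n}\left\|X_{i}-\theta\right\|_{2}^{2}=\sum_{i=1}^{n}\left\|X_{i}-\bar{X}_{n}\right\|_{2}^{2}+n\left\|\bar{X}_{n}-\theta\right\|_{2}^{2}$, maximising the likelihood over $\Theta$ is equivalent to minimising $\left\|\bar{X}_{n}-\theta\right\|_{2}$ over $\Theta$, i.e. projecting $\bar{X}_{n}$ onto $\Theta$:
$$
\text{(i)}\ \ \hat{\theta}_{n}=\frac{\bar{X}_{n}}{\left\|\bar{X}_{n}\right\|_{2}} \ \ (\text{provided } \bar{X}_{n} \neq 0), \qquad\qquad \text{(ii)}\ \ \hat{\theta}_{n}=\left(I_{d}-v v^{\top}\right) \bar{X}_{n}.
$$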

(d) For $\Sigma = I_{d}$, we want to test the null hypothesis $\Theta_{0}=\{0\}$ (i.e. $\theta = 0$) against the composite alternative $\Theta_{1}=\mathbb{R}^{d} \backslash \{0\}$. Compute the likelihood ratio statistic $\Lambda\left(\Theta_{1}, \Theta_{0}\right)$ and give its distribution under the null hypothesis. Compare this result with the statement of Wilks' theorem.
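A sketch for (d), assuming the usual convention $\Lambda\left(\Theta_{1}, \Theta_{0}\right)=2 \log \left(\sup_{\theta \in \mathbb{R}^{d}} L(\theta) / L(0)\right)$: with $\Sigma=I_{d}$ the unrestricted maximiser is $\hat{\theta}_{n}=\bar{X}_{n}$, so
$$
\Lambda\left(\Theta_{1}, \Theta_{0}\right)=\sum_{i=1}^{n}\left\|X_{i}\right\|_{2}^{2}-\sum_{i=1}^{n}\left\|X_{i}-\bar{X}_{n}\right\|_{2}^{2}=n\left\|\bar{X}_{n}\right\|_{2}^{2},
$$
and under $H_{0}$ we have $\sqrt{n}\, \bar{X}_{n} \sim \mathcal{N}_{d}\left(0, I_{d}\right)$, so $\Lambda \sim \chi_{d}^{2}$ exactly for every $n$. Wilks' theorem asserts the $\chi_{d}^{2}$ limit (with $d$ degrees of freedom) only asymptotically; in this Gaussian model the limiting distribution already holds exactly.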
