Paper 1, Section II, 27J

Principles of Statistics | Part II, 2016

Derive the maximum likelihood estimator $\hat{\theta}_{n}$ based on independent observations $X_{1}, \ldots, X_{n}$ that are identically distributed as $N(\theta, 1)$, where the unknown parameter $\theta$ lies in the parameter space $\Theta=\mathbb{R}$. Find the limiting distribution of $\sqrt{n}\left(\hat{\theta}_{n}-\theta\right)$ as $n \rightarrow \infty$.
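As a brief sketch of the standard computation (not part of the question as set): maximising the Gaussian log-likelihood gives the sample mean, and the central limit theorem identifies the limit,

$$\ell_{n}(\theta)=-\frac{1}{2}\sum_{i=1}^{n}\left(X_{i}-\theta\right)^{2}+\text{const}, \qquad \ell_{n}'(\theta)=\sum_{i=1}^{n}\left(X_{i}-\theta\right)=0 \;\Longrightarrow\; \hat{\theta}_{n}=\bar{X}_{n},$$

$$\sqrt{n}\left(\hat{\theta}_{n}-\theta\right)=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\left(X_{i}-\theta\right) \xrightarrow{d} N(0,1),$$

since the $X_{i}-\theta$ are i.i.d. with mean $0$ and variance $1$.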

Now define

$$\tilde{\theta}_{n}= \begin{cases} \hat{\theta}_{n} & \text{whenever } \left|\hat{\theta}_{n}\right|>n^{-1/4}, \\ 0 & \text{otherwise,} \end{cases}$$

and find the limiting distribution of $\sqrt{n}\left(\tilde{\theta}_{n}-\theta\right)$ as $n \rightarrow \infty$.
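A hedged sketch of the usual case analysis (again, not part of the question text): if $\theta \neq 0$ then $\hat{\theta}_{n}=\theta+O_{P}(n^{-1/2})$, so $|\hat{\theta}_{n}|>n^{-1/4}$ with probability tending to one and $\tilde{\theta}_{n}=\hat{\theta}_{n}$ eventually; if $\theta=0$ then $|\hat{\theta}_{n}|=O_{P}(n^{-1/2}) \leq n^{-1/4}$ with probability tending to one and $\tilde{\theta}_{n}=0$ eventually. Hence

$$\sqrt{n}\left(\tilde{\theta}_{n}-\theta\right) \xrightarrow{d} \begin{cases} N(0,1) & \text{if } \theta \neq 0, \\ \delta_{0} \text{ (point mass at } 0\text{)} & \text{if } \theta=0. \end{cases}$$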

Calculate

$$\lim_{n \rightarrow \infty} \sup_{\theta \in \Theta} n E_{\theta}\left(T_{n}-\theta\right)^{2}$$

for the choices $T_{n}=\hat{\theta}_{n}$ and $T_{n}=\tilde{\theta}_{n}$. Based on the above findings, which estimator $T_{n}$ of $\theta$ would you prefer? Explain your answer.

[Throughout, you may use standard facts of stochastic convergence, such as the central limit theorem, provided they are clearly stated.]
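The following is a minimal Monte Carlo sketch (illustrative only, not part of the question; the function name `scaled_mse` and all parameter choices are my own) of why the scaled quadratic risk of the MLE stays near $1$ uniformly in $\theta$, while the thresholded estimator's risk blows up near $\theta \approx n^{-1/4}$: there the threshold sets $\tilde{\theta}_{n}=0$ with non-vanishing probability, incurring squared error of order $n^{-1/2}$, so $n \cdot \mathrm{MSE}$ is of order $\sqrt{n}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_mse(n, theta, reps=20000):
    """Monte Carlo estimate of n * E_theta[(T_n - theta)^2] for both estimators.

    Since the MLE is the sample mean, theta_hat ~ N(theta, 1/n); we simulate it
    directly rather than drawing all n observations.
    """
    theta_hat = theta + rng.standard_normal(reps) / np.sqrt(n)   # MLE (sample mean)
    theta_tilde = np.where(np.abs(theta_hat) > n ** (-0.25),     # thresholded estimator
                           theta_hat, 0.0)
    return (n * np.mean((theta_hat - theta) ** 2),
            n * np.mean((theta_tilde - theta) ** 2))

n = 10_000
for theta in (0.0, n ** (-0.25), 1.0):
    mse_hat, mse_tilde = scaled_mse(n, theta)
    print(f"theta = {theta:.4f}: n*MSE(MLE) = {mse_hat:.2f}, "
          f"n*MSE(thresholded) = {mse_tilde:.2f}")
```

With these (arbitrary) settings, the MLE column stays close to $1$ for every $\theta$, whereas the thresholded estimator is near $0$ at $\theta=0$ but of order $\sqrt{n}/2$ at $\theta=n^{-1/4}$, consistent with $\sup_{\theta} n E_{\theta}(\tilde{\theta}_{n}-\theta)^{2} \rightarrow \infty$.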
