Paper 3, Section II, J

Principles of Statistics | Part II, 2014

State and prove Wilks' theorem about testing the simple hypothesis $H_0: \theta = \theta_0$ against the alternative $H_1: \theta \in \Theta \setminus \{\theta_0\}$, in a one-dimensional regular parametric model $\{f(\cdot, \theta): \theta \in \Theta\}$, $\Theta \subseteq \mathbb{R}$. [You may use without proof the results from lectures on the consistency and asymptotic distribution of maximum likelihood estimators, as well as on uniform laws of large numbers. Necessary regularity conditions can be assumed without statement.]
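For reference, the statement being asked for can be sketched as follows (under the standard regularity conditions, which are assumed rather than spelled out): writing $\hat{\theta}_n$ for the maximum likelihood estimator and $\ell_n(\theta) = \sum_{i=1}^{n} \log f(X_i, \theta)$ for the log-likelihood, under $H_0$ the likelihood-ratio statistic satisfies

$$\Lambda_n(\theta_0) = 2\log \frac{\prod_{i=1}^{n} f(X_i, \hat{\theta}_n)}{\prod_{i=1}^{n} f(X_i, \theta_0)} = 2\bigl(\ell_n(\hat{\theta}_n) - \ell_n(\theta_0)\bigr) \xrightarrow{d} \chi^2_1 \quad \text{as } n \to \infty.$$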

Find the maximum likelihood estimator $\hat{\theta}_n$ based on i.i.d. observations $X_1, \ldots, X_n$ in a $N(0, \theta)$-model, $\theta \in \Theta = (0, \infty)$. Deduce the limit distribution as $n \rightarrow \infty$ of the sequence of statistics

$$-n\left(\log\left(\overline{X^2}\right) - \left(\overline{X^2} - 1\right)\right),$$

where $\overline{X^2} = (1/n) \sum_{i=1}^{n} X_i^2$ and $X_1, \ldots, X_n$ are i.i.d. $N(0,1)$.
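A sketch of how the displayed statistic connects to the theorem (assuming the usual form of the Gaussian log-likelihood; this is not a full solution): in the $N(0, \theta)$ model,

$$\ell_n(\theta) = -\frac{n}{2}\log(2\pi\theta) - \frac{1}{2\theta}\sum_{i=1}^{n} X_i^2,$$

which is maximised at $\hat{\theta}_n = \overline{X^2}$, so that

$$2\bigl(\ell_n(\hat{\theta}_n) - \ell_n(1)\bigr) = n\bigl(\overline{X^2} - 1 - \log \overline{X^2}\bigr) = -n\bigl(\log \overline{X^2} - (\overline{X^2} - 1)\bigr).$$

Since the $X_i$ are i.i.d. $N(0,1)$, i.e. $\theta_0 = 1$, the displayed statistic is exactly the likelihood-ratio statistic $\Lambda_n(1)$, and Wilks' theorem gives a $\chi^2_1$ limit.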
