2.II.27I

Principles of Statistics | Part II, 2007

(i) State Wilks' likelihood ratio test of the null hypothesis $H_{0}: \theta \in \Theta_{0}$ against the alternative $H_{1}: \theta \in \Theta_{1}$, where $\Theta_{0} \subset \Theta_{1}$. Explain when this test may be used.
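
A minimal sketch of the statistic in question, under standard smoothness assumptions (the likelihood notation $L_n$ is introduced here only for illustration):

$$2 \log \Lambda_n = 2\left(\sup_{\theta \in \Theta_{1}} \log L_n(\theta) - \sup_{\theta \in \Theta_{0}} \log L_n(\theta)\right),$$

with $H_{0}$ rejected when $2 \log \Lambda_n$ is large; under $H_{0}$ and suitable regularity conditions this is asymptotically $\chi^{2}$ with $\dim \Theta_{1} - \dim \Theta_{0}$ degrees of freedom.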

(ii) Independent identically-distributed observations $X_{1}, \ldots, X_{n}$ take values in the set $S=\{1, \ldots, K\}$, with common distribution which under the null hypothesis is of the form

$$P\left(X_{1}=k \mid \theta\right)=f(k \mid \theta) \quad(k \in S)$$

for some $\theta \in \Theta_{0}$, where $\Theta_{0}$ is an open subset of some Euclidean space $\mathbb{R}^{d}$, $d<K-1$. Under the alternative hypothesis, the probability mass function of the $X_{i}$ is unrestricted.
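
As a rough guide to setting up the test (a sketch only, using the counts $N_{j}$ defined later in the question and writing $p=(p_{1}, \ldots, p_{K})$ for a generic unrestricted probability mass function), the two maximised likelihoods are

$$\sup_{\theta \in \Theta_{0}} \prod_{j=1}^{K} f(j \mid \theta)^{N_{j}} = \prod_{j=1}^{K} f(j \mid \hat{\theta}_{n})^{N_{j}}, \qquad \sup_{p} \prod_{j=1}^{K} p_{j}^{N_{j}} = \prod_{j=1}^{K} \left(\frac{N_{j}}{n}\right)^{N_{j}},$$

the unrestricted supremum being attained at $\hat{p}_{j}=N_{j}/n$.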

Assuming sufficient regularity conditions on $f$ to guarantee the existence and uniqueness of a maximum-likelihood estimator $\hat{\theta}_{n}\left(X_{1}, \ldots, X_{n}\right)$ of $\theta$ for each $n$, show that for large $n$ the Wilks' likelihood ratio test statistic is approximately of the form

$$\sum_{j=1}^{K}\left(N_{j}-n \hat{\pi}_{j}\right)^{2} / N_{j}$$

where $N_{j}=\sum_{i=1}^{n} I_{\left\{X_{i}=j\right\}}$, and $\hat{\pi}_{j}=f\left(j \mid \hat{\theta}_{n}\right)$. What is the asymptotic distribution of this statistic?
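
One way to see the approximation (a sketch, not the only possible argument): since $\sum_{j} N_{j}=n$ and $\sum_{j} \hat{\pi}_{j}=1$, the first-order terms cancel and a second-order expansion of the logarithm gives

$$2 \log \Lambda_n = 2 \sum_{j=1}^{K} N_{j} \log \frac{N_{j}}{n \hat{\pi}_{j}} \approx \sum_{j=1}^{K} \frac{\left(N_{j}-n \hat{\pi}_{j}\right)^{2}}{N_{j}},$$

and under $H_{0}$ the statistic is asymptotically $\chi^{2}_{K-1-d}$, the degrees of freedom being the difference in dimension between the unrestricted model ($K-1$ free parameters) and the null model ($d$ parameters).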
