Paper 1, Section II, K

Principles of Statistics | Part II, 2013

When the real parameter $\Theta$ takes value $\theta$, variables $X_1, X_2, \ldots$ arise independently from a distribution $P_\theta$ having density function $p_\theta(x)$ with respect to an underlying measure $\mu$. Define the score variable $U_n(\theta)$ and the information function $I_n(\theta)$ for estimation of $\Theta$ based on $\boldsymbol{X}^n := (X_1, \ldots, X_n)$, and relate $I_n(\theta)$ to $i(\theta) := I_1(\theta)$.
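For reference, the standard definitions (not part of the question text) take the score to be the derivative of the log-likelihood and the information to be its variance; by independence the information is additive in $n$:

$$
U_n(\theta) \;=\; \frac{\partial}{\partial \theta} \sum_{i=1}^{n} \log p_\theta(X_i), \qquad
I_n(\theta) \;=\; \operatorname{var}_\theta\{U_n(\theta)\} \;=\; n\, i(\theta).
$$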

State and prove the Cramér-Rao inequality for the variance of an unbiased estimator of $\Theta$. Under what conditions does this inequality become an equality? What is the form of the estimator in this case? [You may assume $\mathbb{E}_\theta\{U_n(\theta)\} = 0$, $\operatorname{var}_\theta\{U_n(\theta)\} = I_n(\theta)$, and any further required regularity conditions, without comment.]
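A brief sketch of the usual argument (a reminder, not the examiners' model answer): for an unbiased estimator $T = T(\boldsymbol{X}^n)$, differentiating $\mathbb{E}_\theta(T) = \theta$ under the integral gives $\operatorname{cov}_\theta\{T, U_n(\theta)\} = 1$, and the Cauchy-Schwarz inequality then yields

$$
\operatorname{var}_\theta(T) \;\ge\; \frac{\bigl(\operatorname{cov}_\theta\{T, U_n(\theta)\}\bigr)^2}{\operatorname{var}_\theta\{U_n(\theta)\}} \;=\; \frac{1}{I_n(\theta)},
$$

with equality exactly when $U_n(\theta)$ is almost surely an affine function of $T$, in which case $T = \theta + U_n(\theta)/I_n(\theta)$.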

Let $\widehat{\Theta}_n$ be the maximum likelihood estimator of $\Theta$ based on $\boldsymbol{X}^n$. What is the asymptotic distribution of $n^{\frac{1}{2}}(\widehat{\Theta}_n - \Theta)$ when $\Theta = \theta$?
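Under the usual regularity conditions the answer is the standard asymptotic normality of the maximum likelihood estimator:

$$
n^{\frac{1}{2}}\bigl(\widehat{\Theta}_n - \theta\bigr) \;\xrightarrow{\;d\;}\; N\!\bigl(0,\, i(\theta)^{-1}\bigr).
$$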

Suppose that, for each $n$, $\widehat{\Theta}_n$ is unbiased for $\Theta$, and the variance of $n^{\frac{1}{2}}(\widehat{\Theta}_n - \Theta)$ is exactly equal to its asymptotic variance. By considering the estimator $\alpha \widehat{\Theta}_k + (1-\alpha) \widehat{\Theta}_n$, or otherwise, show that, for $k < n$, $\operatorname{cov}_\theta(\widehat{\Theta}_k, \widehat{\Theta}_n) = \operatorname{var}_\theta(\widehat{\Theta}_n)$.
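One way the suggested approach can be carried out (a sketch, not the official solution): the stated assumption gives $\operatorname{var}_\theta(\widehat{\Theta}_n) = 1/(n\,i(\theta)) = 1/I_n(\theta)$, so $\widehat{\Theta}_n$ attains the Cramér-Rao bound. For any real $\alpha$ the combination $T_\alpha := \alpha \widehat{\Theta}_k + (1-\alpha)\widehat{\Theta}_n$ is unbiased and depends only on $\boldsymbol{X}^n$, so $\operatorname{var}_\theta(T_\alpha) \ge 1/I_n(\theta) = \operatorname{var}_\theta(T_0)$. Writing $c := \operatorname{cov}_\theta(\widehat{\Theta}_k, \widehat{\Theta}_n)$,

$$
\operatorname{var}_\theta(T_\alpha) \;=\; \alpha^2 \operatorname{var}_\theta(\widehat{\Theta}_k) + (1-\alpha)^2 \operatorname{var}_\theta(\widehat{\Theta}_n) + 2\alpha(1-\alpha)\,c
$$

is a quadratic in $\alpha$ minimised over all real $\alpha$ at $\alpha = 0$, so its derivative there vanishes, giving $c = \operatorname{var}_\theta(\widehat{\Theta}_n)$.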
