Paper 1, Section II, J

Principles of Statistics | Part II, 2021

Let $X_1, \ldots, X_n$ be random variables with joint probability density function in a statistical model $\{f_\theta : \theta \in \mathbb{R}\}$.

(a) Define the Fisher information $I_n(\theta)$. What do we mean when we say that the Fisher information tensorises?
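For reference, here is one standard way the definition is written (a sketch; conventions vary slightly between courses):

$$I_n(\theta) = \mathbb{E}_\theta\left[\left(\frac{\partial}{\partial\theta}\log f_\theta(X_1,\ldots,X_n)\right)^2\right].$$

Tensorisation then refers to the information of independent observations adding up, $I_n(\theta) = \sum_{i=1}^n I^{(i)}(\theta)$, which in the i.i.d. case reduces to $I_n(\theta) = n\,I_1(\theta)$.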

(b) Derive the relationship between the Fisher information and the derivative of the score function in a regular model.
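As a reminder of the identity being asked for, here is a sketch assuming the model is regular enough to differentiate twice under the integral sign. Writing $S_n(\theta) = \frac{\partial}{\partial\theta}\log f_\theta(X_1,\ldots,X_n)$ for the score, differentiating $\int f_\theta = 1$ once gives $\mathbb{E}_\theta[S_n(\theta)] = 0$; differentiating this identity again, and using $\partial_\theta f_\theta = S_n(\theta) f_\theta$,

$$0 = \frac{\partial}{\partial\theta}\int S_n(\theta)\,f_\theta = \int \frac{\partial S_n(\theta)}{\partial\theta}\,f_\theta + \int S_n(\theta)^2\,f_\theta = \mathbb{E}_\theta\!\left[\frac{\partial S_n(\theta)}{\partial\theta}\right] + I_n(\theta),$$

so that $I_n(\theta) = -\mathbb{E}_\theta\left[\partial_\theta S_n(\theta)\right]$.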

(c) Consider the model defined by $X_1 = \theta + \varepsilon_1$ and

$$X_i = \theta(1-\sqrt{\gamma}) + \sqrt{\gamma}\,X_{i-1} + \sqrt{1-\gamma}\,\varepsilon_i \quad \text{for } i = 2, \ldots, n,$$

where $\varepsilon_1, \ldots, \varepsilon_n$ are i.i.d. $N(0,1)$ random variables, and $\gamma \in [0,1)$ is a known constant. Compute the Fisher information $I_n(\theta)$. For which values of $\gamma$ does the Fisher information tensorise? State a lower bound on the variance of an unbiased estimator $\hat{\theta}$ in this model.
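For orientation only, here is an unofficial sketch of the computation (not the examiners' solution). The conditional distributions are $X_1 \sim N(\theta, 1)$ and $X_i \mid X_{i-1} \sim N(\theta(1-\sqrt{\gamma}) + \sqrt{\gamma}X_{i-1},\ 1-\gamma)$, so the log-likelihood is a sum of Gaussian terms and the score works out to

$$S_n(\theta) = (X_1 - \theta) + \frac{1-\sqrt{\gamma}}{1-\gamma}\sum_{i=2}^{n}\left(X_i - \theta(1-\sqrt{\gamma}) - \sqrt{\gamma}X_{i-1}\right),$$

whose $\theta$-derivative is deterministic, giving

$$I_n(\theta) = 1 + (n-1)\frac{(1-\sqrt{\gamma})^2}{1-\gamma} = 1 + (n-1)\frac{1-\sqrt{\gamma}}{1+\sqrt{\gamma}}.$$

This matches $n\,I_1(\theta) = n$ exactly when $\gamma = 0$ (the i.i.d. case), and the Cramér–Rao bound gives $\operatorname{Var}_\theta(\hat{\theta}) \geq 1/I_n(\theta)$ for any unbiased $\hat{\theta}$.

The closed form can be sanity-checked numerically: the score has mean zero and variance $I_n(\theta)$, so a Monte Carlo estimate of $\operatorname{Var}_\theta(S_n(\theta))$ should agree with the formula. The parameter values below ($\theta = 2$, $\gamma = 0.5$, $n = 10$) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, gamma, n, reps = 2.0, 0.5, 10, 200_000  # illustrative values
sg = np.sqrt(gamma)

# Simulate `reps` independent paths of the model, vectorised over paths.
eps = rng.standard_normal((reps, n))
x = np.empty((reps, n))
x[:, 0] = theta + eps[:, 0]
for i in range(1, n):
    x[:, i] = theta * (1 - sg) + sg * x[:, i - 1] + np.sqrt(1 - gamma) * eps[:, i]

# Score of the Gaussian log-likelihood (see the sketch above).
resid = x[:, 1:] - theta * (1 - sg) - sg * x[:, :-1]
score = (x[:, 0] - theta) + (1 - sg) / (1 - gamma) * resid.sum(axis=1)

closed_form = 1 + (n - 1) * (1 - sg) / (1 + sg)
print(f"Monte Carlo Var(score): {score.var():.3f}")  # should be close to I_n
print(f"Closed-form I_n(theta): {closed_form:.3f}")
```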
