3.I.5J

Statistical Modelling | Part II, 2008

Consider the linear model $Y = X\beta + \varepsilon$. Here, $Y$ is an $n$-dimensional vector of observations, $X$ is a known $n \times p$ matrix, $\beta$ is an unknown $p$-dimensional parameter, and $\varepsilon \sim N_n(0, \sigma^2 I)$, with $\sigma^2$ unknown. Assume that $X$ has full rank and that $p \ll n$. Suppose that we are interested in checking the assumption $\varepsilon \sim N_n(0, \sigma^2 I)$. Let $\hat{Y} = X\hat{\beta}$, where $\hat{\beta}$ is the maximum likelihood estimate of $\beta$. Write in terms of $X$ an expression for the projection matrix $P = (p_{ij} : 1 \leqslant i, j \leqslant n)$ which appears in the maximum likelihood equation $\hat{Y} = X\hat{\beta} = PY$.
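As an aside (not part of the question): for a full-rank design, the standard projection here is the hat matrix $P = X(X^\top X)^{-1}X^\top$. The following minimal R sketch, on simulated data, checks the identity $\hat{Y} = PY$ numerically; the design and coefficients are arbitrary choices for illustration.

```r
## Numerical check of the hat-matrix identity Yhat = P Y (simulated data)
set.seed(1)
n <- 100; p <- 3
X <- cbind(1, matrix(rnorm(n * (p - 1)), n, p - 1))  # full-rank design
Y <- drop(X %*% c(1, 2, -1)) + rnorm(n)
P <- X %*% solve(t(X) %*% X) %*% t(X)                # P = X (X'X)^{-1} X'
fit <- lm(Y ~ X - 1)                                 # beta.hat is the MLE here
max(abs(fitted(fit) - drop(P %*% Y)))                # ~ 1e-12: X beta.hat = P Y
```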

Find the distribution of $\hat{\varepsilon} = Y - \hat{Y}$, and show that, in general, the components of $\hat{\varepsilon}$ are not independent.
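A hedged simulation sketch of the point being asked about: writing $\hat{\varepsilon} = (I - P)Y$, the covariance of the fitted residuals is $\sigma^2(I - P)$, whose off-diagonal entries are generally nonzero. The snippet below compares an empirical off-diagonal covariance with this theoretical value on simulated data.

```r
## Monte Carlo: covariance of ehat = (I - P) Y has nonzero off-diagonals
set.seed(2)
n <- 20; p <- 2; sigma <- 1
X <- cbind(1, rnorm(n))
P <- X %*% solve(t(X) %*% X) %*% t(X)
E <- replicate(5000, {
  Y <- drop(X %*% c(0.5, 1)) + rnorm(n, sd = sigma)
  drop((diag(n) - P) %*% Y)              # fitted residuals for one sample
})
cov(t(E))[1, 2]                          # empirical covariance of ehat_1, ehat_2
(sigma^2 * (diag(n) - P))[1, 2]          # theoretical value: nonzero in general
```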

A standard procedure used to check our assumption on $\varepsilon$ is to check whether the studentized fitted residuals

$$\hat{\eta}_i = \frac{\hat{\varepsilon}_i}{\tilde{\sigma}\sqrt{1 - p_{ii}}}, \quad i = 1, \ldots, n$$

look like a random sample from an $N(0,1)$ distribution. Here,

$$\tilde{\sigma}^2 = \frac{1}{n-p}\,\|Y - X\hat{\beta}\|^2.$$

Say, briefly, how you might do this in R.
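One common approach, sketched below rather than offered as the intended answer: fit the model with `lm`, extract the studentized residuals with `rstandard` (which computes exactly the $\hat{\eta}_i$ above from $\tilde{\sigma}$ and the leverages $p_{ii}$, themselves available via `hatvalues`), and inspect a normal Q-Q plot. The data frame and variable names are placeholders.

```r
## Q-Q plot of studentized fitted residuals against N(0,1)
fit <- lm(y ~ x1 + x2, data = dat)   # 'dat', 'y', 'x1', 'x2' are hypothetical
eta <- rstandard(fit)                # eta_i = ehat_i / (sigma.tilde * sqrt(1 - p_ii))
qqnorm(eta); qqline(eta)             # points near the line support normality
```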

This procedure appears to ignore the dependence between the components of $\hat{\varepsilon}$ noted above. What feature of the given set-up makes this reasonable?
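A numerical illustration of the relevant feature, hedged as a hint rather than a full answer: since $P$ is a rank-$p$ projection, $\sum_i p_{ii} = \operatorname{tr}(P) = p$, so when $p \ll n$ the leverages $p_{ii}$ are typically tiny and the dependence between residuals is correspondingly weak.

```r
## Leverages p_ii: trace(P) = p, so the mean leverage p/n is small when p << n
set.seed(3)
n <- 1000; p <- 4
X <- cbind(1, matrix(rnorm(n * (p - 1)), n, p - 1))
h <- diag(X %*% solve(t(X) %*% X) %*% t(X))   # the leverages p_ii
c(sum(h), max(h))                             # sum is exactly p; max is typically small
```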
