Paper 3, Section II, I

Statistical Modelling | Part II, 2009

Consider the linear model $Y = X\beta + \varepsilon$, where $\varepsilon \sim N_{n}(0, \sigma^{2} I)$ and $X$ is an $n \times p$ matrix of full rank $p < n$. Suppose that the parameter $\beta$ is partitioned into $k$ sets as follows: $\beta^{\top} = (\beta_{1}^{\top} \cdots \beta_{k}^{\top})$. What does it mean for a pair of sets $\beta_{i}, \beta_{j}$, $i \neq j$, to be orthogonal? What does it mean for all $k$ sets to be mutually orthogonal?

In the model

$$Y_{i} = \beta_{0} + \beta_{1} x_{i1} + \beta_{2} x_{i2} + \varepsilon_{i}$$

where $\varepsilon_{i} \sim N(0, \sigma^{2})$ are independent and identically distributed, find necessary and sufficient conditions on $x_{11}, \ldots, x_{n1}, x_{12}, \ldots, x_{n2}$ for $\beta_{0}, \beta_{1}$ and $\beta_{2}$ to be mutually orthogonal.

If $\beta_{0}, \beta_{1}$ and $\beta_{2}$ are mutually orthogonal, what consequence does this have for the joint distribution of the corresponding maximum likelihood estimators $\hat{\beta}_{0}, \hat{\beta}_{1}$ and $\hat{\beta}_{2}$?
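As a numerical aside (not part of the original question): mutual orthogonality of $\beta_{0}, \beta_{1}, \beta_{2}$ here amounts to $X^{\top}X$ being diagonal, i.e. $\sum_i x_{i1} = 0$, $\sum_i x_{i2} = 0$ and $\sum_i x_{i1} x_{i2} = 0$. A minimal NumPy sketch, using an illustrative design of my own choosing that satisfies these conditions, shows the resulting diagonal $X^{\top}X$, so that $\operatorname{Cov}(\hat{\beta}) = \sigma^{2}(X^{\top}X)^{-1}$ is diagonal and the (jointly normal) MLEs are independent:

```python
import numpy as np

# Hypothetical design: both covariate columns sum to zero and their
# elementwise product sums to zero, so the three parameter sets are
# mutually orthogonal.
x1 = np.array([-3., -1., 1., 3., -3., -1., 1., 3.])
x2 = np.array([1., -1., -1., 1., 1., -1., -1., 1.])
n = len(x1)
X = np.column_stack([np.ones(n), x1, x2])  # columns: intercept, x1, x2

XtX = X.T @ X
# Off-diagonal entries vanish, hence Cov(beta_hat) = sigma^2 (X^T X)^{-1}
# is diagonal: under normal errors the MLEs are mutually independent.
assert np.allclose(XtX, np.diag(np.diag(XtX)))
```

The same check fails for an uncentred covariate (e.g. replace `x1` by `x1 + 1`), since the intercept column is then no longer orthogonal to it.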
