Paper 4, Section II, I

Statistical Modelling | Part II, 2009

Consider the linear model $Y = X\beta + \varepsilon$, where $\varepsilon \sim N_n(0, \sigma^2 I)$ and $X$ is an $n \times p$ matrix of full rank $p < n$. Find the form of the maximum likelihood estimator $\hat{\beta}$ of $\beta$, and derive its distribution assuming that $\sigma^2$ is known.
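A sketch of the standard argument (not part of the original question, included for reference): the log-likelihood is quadratic in $\beta$, so maximising it reduces to least squares, and linearity of $\hat{\beta}$ in the Gaussian vector $Y$ gives its distribution.

```latex
\ell(\beta) = -\frac{n}{2}\log(2\pi\sigma^2)
  - \frac{1}{2\sigma^2}\lVert Y - X\beta \rVert^2 .
% Setting the gradient to zero gives the normal equations
X^{\mathsf{T}} X \hat{\beta} = X^{\mathsf{T}} Y
  \quad\Longrightarrow\quad
  \hat{\beta} = (X^{\mathsf{T}} X)^{-1} X^{\mathsf{T}} Y ,
% and, since \hat{\beta} is linear in Y \sim N_n(X\beta, \sigma^2 I),
\hat{\beta} \sim N_p\!\left(\beta,\; \sigma^2 (X^{\mathsf{T}} X)^{-1}\right).
```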

Assuming the prior $\pi(\beta, \sigma^2) \propto \sigma^{-2}$, find the joint posterior of $(\beta, \sigma^2)$ up to a normalising constant. Derive the posterior conditional distribution $\pi(\beta \mid \sigma^2, X, Y)$.
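A sketch of the key step (not part of the original question): combining the likelihood with the improper prior and completing the square in $\beta$ yields the conditional posterior.

```latex
\pi(\beta, \sigma^2 \mid X, Y)
  \propto (\sigma^2)^{-(n/2+1)}
  \exp\!\left(-\frac{1}{2\sigma^2}\lVert Y - X\beta \rVert^2\right),
% where completing the square gives
\lVert Y - X\beta \rVert^2
  = \lVert Y - X\hat{\beta} \rVert^2
  + (\beta - \hat{\beta})^{\mathsf{T}} X^{\mathsf{T}} X (\beta - \hat{\beta}),
% so that
\beta \mid \sigma^2, X, Y
  \sim N_p\!\left(\hat{\beta},\; \sigma^2 (X^{\mathsf{T}} X)^{-1}\right).
```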

Comment on the distribution of $\hat{\beta}$ found above and the posterior conditional $\pi(\beta \mid \sigma^2, X, Y)$. Comment further on the predictive distribution of $y^*$ at input $x^*$ under both the maximum likelihood and Bayesian approaches.
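For reference (not part of the original question), the two predictive distributions with $\sigma^2$ known take the following forms: the plug-in ML prediction ignores the uncertainty in $\hat{\beta}$, while the Bayesian posterior predictive inflates the variance by the term $x^{*\mathsf{T}}(X^{\mathsf{T}} X)^{-1} x^*$.

```latex
% Maximum likelihood (plug-in) predictive:
y^* \mid x^* \sim N\!\left(x^{*\mathsf{T}} \hat{\beta},\; \sigma^2\right).
% Bayesian posterior predictive (sigma^2 known):
y^* \mid x^*, \sigma^2, X, Y
  \sim N\!\left(x^{*\mathsf{T}} \hat{\beta},\;
  \sigma^2\bigl(1 + x^{*\mathsf{T}} (X^{\mathsf{T}} X)^{-1} x^*\bigr)\right).
```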