Paper 4, Section II, J

Principles of Statistics | Part II, 2016

Consider a decision problem with parameter space $\Theta$. Define the concepts of a Bayes decision rule $\delta_{\pi}$ and of a least favourable prior.

Suppose $\pi$ is a prior distribution on $\Theta$ such that the Bayes risk of the Bayes rule equals $\sup_{\theta \in \Theta} R(\delta_{\pi}, \theta)$, where $R(\delta, \theta)$ is the risk function associated to the decision problem. Prove that $\delta_{\pi}$ is least favourable.
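A sketch of the usual argument (not part of the question text, and using the standard notation $r(\pi) = \inf_{\delta} \int_{\Theta} R(\delta, \theta)\, d\pi(\theta)$ for the Bayes risk of a prior $\pi$): for any other prior $\pi'$,

```latex
r(\pi') \;\le\; \int_{\Theta} R(\delta_{\pi}, \theta)\, d\pi'(\theta)
        \;\le\; \sup_{\theta \in \Theta} R(\delta_{\pi}, \theta)
        \;=\; r(\pi),
```

where the first inequality holds because $\delta_{\pi}$ is one candidate rule in the infimum defining $r(\pi')$, and the last equality is the hypothesis. Since $r(\pi') \le r(\pi)$ for every prior $\pi'$, the prior $\pi$ is least favourable.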

Now consider a random variable $X$ arising from the binomial distribution $\operatorname{Bin}(n, \theta)$, where $\theta \in \Theta = [0,1]$. Construct a least favourable prior for the squared risk $R(\delta, \theta) = E_{\theta}(\delta(X) - \theta)^{2}$. [You may use without proof the fact that the Bayes rule for quadratic risk is given by the posterior mean.]
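A sketch of the standard construction (not part of the question text): take the conjugate prior $\pi = \operatorname{Beta}(a, b)$, under which the posterior mean, and hence the Bayes rule, is $\delta_{\pi}(X) = (X + a)/(n + a + b)$. Decomposing the risk into variance plus squared bias and choosing $a = b = \sqrt{n}/2$ gives

```latex
R(\delta_{\pi}, \theta)
  = \frac{n\theta(1-\theta)}{(n+\sqrt{n})^{2}}
  + \left(\frac{\tfrac{\sqrt{n}}{2}(1 - 2\theta)}{n+\sqrt{n}}\right)^{2}
  = \frac{n\theta(1-\theta) + \tfrac{n}{4}(1-2\theta)^{2}}{(n+\sqrt{n})^{2}}
  = \frac{n}{4\,(n+\sqrt{n})^{2}},
```

which is constant in $\theta$. The Bayes risk therefore equals $\sup_{\theta \in \Theta} R(\delta_{\pi}, \theta)$, so by the first part the prior $\operatorname{Beta}(\sqrt{n}/2, \sqrt{n}/2)$ is least favourable.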
