1.II.27J

Principles of Statistics | Part II, 2006

(a) What is a loss function? What is a decision rule? What is the risk function of a decision rule? What is the Bayes risk of a decision rule with respect to a prior $\pi$?
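For reference, a compact version of the definitions part (a) asks for (writing $\mathcal{X}$ for the sample space, notation introduced here): a loss function is a map $L : \Theta \times \mathcal{A} \to [0, \infty)$ giving the cost $L(\theta, a)$ of taking action $a$ when the parameter is $\theta$; a decision rule is a measurable map $d : \mathcal{X} \to \mathcal{A}$; its risk function is $R(\theta, d) = \mathbb{E}_{\theta}\, L(\theta, d(X))$; and its Bayes risk with respect to a prior $\pi$ is

$$
r(\pi, d) = \int_{\Theta} R(\theta, d)\, \pi(d\theta).
$$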

(b) Let $\theta \mapsto R(\theta, d)$ denote the risk function of decision rule $d$, and let $r(\pi, d)$ denote the Bayes risk of decision rule $d$ with respect to prior $\pi$. Suppose that $d^{*}$ is a decision rule and $\pi_{0}$ is a prior over the parameter space $\Theta$ with the two properties

(i) $r\left(\pi_{0}, d^{*}\right) = \min_{d} r\left(\pi_{0}, d\right)$;

(ii) $\sup_{\theta} R\left(\theta, d^{*}\right) = r\left(\pi_{0}, d^{*}\right)$.

Prove that $d^{*}$ is minimax.
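A sketch of the standard chain of inequalities (filling in the argument the question asks for; any rule $d$ is compared with $d^{*}$ through the prior $\pi_{0}$):

$$
\sup_{\theta} R\left(\theta, d^{*}\right) \overset{\text{(ii)}}{=} r\left(\pi_{0}, d^{*}\right) \overset{\text{(i)}}{\leq} r\left(\pi_{0}, d\right) = \int_{\Theta} R(\theta, d)\, \pi_{0}(d\theta) \leq \sup_{\theta} R(\theta, d),
$$

where the last step holds because an average of the risk cannot exceed its supremum. Hence $\sup_{\theta} R(\theta, d^{*}) \leq \sup_{\theta} R(\theta, d)$ for every rule $d$, which is precisely the statement that $d^{*}$ is minimax.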

(c) Suppose now that $\Theta = \mathcal{A} = \mathbb{R}$, where $\mathcal{A}$ is the space of possible actions, and that the loss function is

$$
L(\theta, a) = \exp(-\lambda a \theta),
$$

where $\lambda$ is a positive constant. If the law of the observation $X$ given parameter $\theta$ is $N\left(\theta, \sigma^{2}\right)$, where $\sigma > 0$ is known, show (using (b) or otherwise) that the rule

$$
d^{*}(x) = x / \left(\sigma^{2} \lambda\right)
$$

is minimax.
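One possible route, sketched under the illustrative assumption of a conjugate prior $\pi_{v} = N(0, v^{2})$ (a choice made here, not specified in the question): the posterior of $\theta$ given $X = x$ is $N(mx, w^{2})$ with $m = v^{2}/(v^{2}+\sigma^{2})$ and $w^{2} = \sigma^{2}v^{2}/(\sigma^{2}+v^{2})$, and minimizing the posterior expected loss over $a$ recovers $d^{*}$, whose risk is then bounded directly:

$$
\begin{aligned}
\mathbb{E}\!\left[e^{-\lambda a \theta} \,\middle|\, X = x\right] &= \exp\!\left(-\lambda a m x + \tfrac{1}{2}\lambda^{2} a^{2} w^{2}\right), \quad \text{minimized at } a = \frac{m x}{\lambda w^{2}} = \frac{x}{\sigma^{2} \lambda}, \\
R\left(\theta, d^{*}\right) &= \mathbb{E}_{\theta}\, e^{-\theta X / \sigma^{2}} = \exp\!\left(-\frac{\theta^{2}}{2\sigma^{2}}\right) \leq 1.
\end{aligned}
$$

Since $L(0, a) = 1$ for every action $a$, every rule $d$ satisfies $\sup_{\theta} R(\theta, d) \geq R(0, d) = 1 = \sup_{\theta} R\left(\theta, d^{*}\right)$, so $d^{*}$ is minimax. This direct bound is the "otherwise" route: for any fixed $\pi_{v}$ the Bayes risk of $d^{*}$ is strictly below $1$, so condition (ii) of (b) does not hold exactly for any single normal prior.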
