Paper 1, Section II, H

Statistics | Part IB, 2012

State and prove the Neyman-Pearson lemma.

A sample of two independent observations, $(x_{1}, x_{2})$, is taken from a distribution with density $f(x ; \theta)=\theta x^{\theta-1}$, $0 \leqslant x \leqslant 1$. It is desired to test $H_{0}: \theta=1$ against $H_{1}: \theta=2$. Show that the best test of size $\alpha$ can be expressed using the number $c$ such that

$$1-c+c \log c=\alpha .$$

Is this the uniformly most powerful test of size $\alpha$ for testing $H_{0}$ against $H_{1}: \theta>1$?
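
As a numerical cross-check (not part of the question as set), here is a minimal sketch that assumes the Neyman-Pearson test rejects $H_{0}$ when $x_{1} x_{2}$ exceeds a cutoff $c$; it solves the size equation above for $c$ and verifies the size by simulation under $H_{0}$, where $X_{1}, X_{2}$ are uniform on $[0,1]$. The function names are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def critical_value(alpha):
    """Solve 1 - c + c*log(c) = alpha for c in (0, 1)."""
    g = lambda c: 1 - c + c * np.log(c) - alpha
    # g'(c) = log(c) < 0 on (0, 1), so g decreases from 1 - alpha to -alpha
    # and the root is unique.
    return brentq(g, 1e-12, 1.0 - 1e-12)

def simulated_size(c, n=10**6, seed=0):
    """Estimate P(X1*X2 > c) under H0, where X1, X2 are iid Uniform(0, 1)."""
    rng = np.random.default_rng(seed)
    x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
    return float(np.mean(x1 * x2 > c))

alpha = 0.05
c = critical_value(alpha)
print(c, simulated_size(c))  # simulated size should be close to alpha
```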

Suppose that the prior distribution of $\theta$ is $P(\theta=1)=4 \gamma /(1+4 \gamma)$, $P(\theta=2)=1 /(1+4 \gamma)$, where $1>\gamma>0$. Find the test of $H_{0}$ against $H_{1}$ that minimizes the probability of error.
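
For orientation, the usual way such a minimum-error (Bayes) rule is obtained, sketched here rather than taken from the question, is to reject $H_{0}$ exactly when the posterior odds favour $H_{1}$:

$$
P(\theta=2)\,f(x_{1};2)f(x_{2};2) > P(\theta=1)\,f(x_{1};1)f(x_{2};1)
\iff \frac{4 x_{1} x_{2}}{1+4\gamma} > \frac{4\gamma}{1+4\gamma}
\iff x_{1} x_{2} > \gamma .
$$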

Let $w(\theta)$ denote the power function of this test at $\theta\,(\geqslant 1)$. Show that

$$w(\theta)=1-\gamma^{\theta}+\gamma^{\theta} \log \gamma^{\theta}$$
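
The sketch below (again, an illustrative check rather than part of the question) assumes the minimum-error test rejects when $x_{1} x_{2} > \gamma$ and compares a Monte Carlo estimate of $P_{\theta}(X_{1} X_{2} > \gamma)$ with the stated closed form; observations with density $\theta x^{\theta-1}$ are drawn by inverse-CDF sampling, $X = U^{1/\theta}$ with $U$ uniform.

```python
import numpy as np

def power_mc(theta, gamma, n=10**6, seed=0):
    """Monte Carlo estimate of P_theta(X1*X2 > gamma); X is sampled as U**(1/theta),
    the inverse-CDF transform for the density theta * x**(theta - 1) on [0, 1]."""
    rng = np.random.default_rng(seed)
    x1 = rng.uniform(size=n) ** (1.0 / theta)
    x2 = rng.uniform(size=n) ** (1.0 / theta)
    return float(np.mean(x1 * x2 > gamma))

def power_formula(theta, gamma):
    """The claimed closed form w(theta) = 1 - gamma**theta + gamma**theta * log(gamma**theta)."""
    g = gamma ** theta
    return 1.0 - g + g * np.log(g)

for theta in (1.0, 1.5, 2.0):
    print(theta, power_mc(theta, gamma=0.3), power_formula(theta, gamma=0.3))
```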
