Paper 2, Section II, J

Optimization and Control | Part II, 2010

(a) Suppose that

$$\left(\begin{array}{l} X \\ Y \end{array}\right) \sim N\left(\left(\begin{array}{l} \mu_{X} \\ \mu_{Y} \end{array}\right),\left(\begin{array}{ll} V_{XX} & V_{XY} \\ V_{YX} & V_{YY} \end{array}\right)\right).$$

Prove that, conditional on $Y=y$, the distribution of $X$ is again multivariate normal, with mean $\mu_{X}+V_{XY} V_{YY}^{-1}\left(y-\mu_{Y}\right)$ and covariance $V_{XX}-V_{XY} V_{YY}^{-1} V_{YX}$.
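One standard route to the proof is to note that $Z \equiv X - V_{XY} V_{YY}^{-1} Y$ is uncorrelated with $Y$, hence independent of it by joint normality. The sketch below is a purely illustrative numerical check of that fact (not part of the question); the dimensions and parameter values are assumptions made only for the example.

```python
# Minimal numerical sketch (illustrative only): Z = X - V_XY V_YY^{-1} Y should be
# uncorrelated with Y, with covariance V_XX - V_XY V_YY^{-1} V_YX.
# All dimensions and values below are assumed purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0, 0.5])                  # (mu_X, mu_Y), dim X = 2, dim Y = 1
V = np.array([[2.0, 0.3, 0.8],
              [0.3, 1.0, 0.4],
              [0.8, 0.4, 1.5]])                  # joint covariance, partitioned 2 + 1
VXX, VXY, VYX, VYY = V[:2, :2], V[:2, 2:], V[2:, :2], V[2:, 2:]

samples = rng.multivariate_normal(mu, V, size=200_000)
X, Y = samples[:, :2], samples[:, 2:]

K = VXY @ np.linalg.inv(VYY)                     # V_XY V_YY^{-1}
Z = X - Y @ K.T

print(np.cov(Z.T, Y.T)[:2, 2:])                  # cross-covariance of Z and Y: ~ 0
print(np.cov(Z.T))                               # ~ V_XX - V_XY V_YY^{-1} V_YX
print(VXX - K @ VYX)
```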

(b) The $\mathbb{R}^{d}$-valued process $X$ evolves in discrete time according to the dynamics

$$X_{t+1}=A X_{t}+\varepsilon_{t+1},$$

where $A$ is a constant $d \times d$ matrix, and the $\varepsilon_{t}$ are independent, with common $N\left(0, \Sigma_{\varepsilon}\right)$ distribution. The process $X$ is not observed directly; instead, all that is seen is the process $Y$ defined as

$$Y_{t}=C X_{t}+\eta_{t},$$

where the $\eta_{t}$ are independent of each other and of the $\varepsilon_{t}$, with common $N\left(0, \Sigma_{\eta}\right)$ distribution.

If the observer has the prior distribution $X_{0} \sim N\left(\hat{X}_{0}, V_{0}\right)$ for $X_{0}$, prove that at all later times the distribution of $X_{t}$ conditional on $\mathcal{Y}_{t} \equiv\left(Y_{1}, \ldots, Y_{t}\right)$ is again normally distributed, with mean $\hat{X}_{t}$ and covariance $V_{t}$ which evolve as

$$\begin{aligned} \hat{X}_{t+1} &= A \hat{X}_{t}+M_{t} C^{T}\left(\Sigma_{\eta}+C M_{t} C^{T}\right)^{-1}\left(Y_{t+1}-C A \hat{X}_{t}\right), \\ V_{t+1} &= M_{t}-M_{t} C^{T}\left(\Sigma_{\eta}+C M_{t} C^{T}\right)^{-1} C M_{t}, \end{aligned}$$

where

$$M_{t}=A V_{t} A^{T}+\Sigma_{\varepsilon}.$$
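For concreteness, here is a minimal numerical sketch (not part of the question) of the recursion in (b), run on a simulated observation sequence. The particular matrices $A$, $C$, $\Sigma_{\varepsilon}$, $\Sigma_{\eta}$ and the prior are illustrative assumptions.

```python
# Minimal sketch of the updating recursion in (b); system matrices and prior
# below are assumed for illustration only.
import numpy as np

def kalman_filter(Ys, A, C, Sigma_eps, Sigma_eta, X_hat0, V0):
    """Return the conditional means X_hat_t and covariances V_t given Y_1..Y_T."""
    X_hat, V = X_hat0, V0
    means, covs = [X_hat], [V]
    for y in Ys:
        M = A @ V @ A.T + Sigma_eps                  # M_t = A V_t A^T + Sigma_eps
        S = Sigma_eta + C @ M @ C.T                  # innovation covariance
        K = M @ C.T @ np.linalg.inv(S)               # gain  M_t C^T S^{-1}
        X_hat = A @ X_hat + K @ (y - C @ A @ X_hat)  # mean update
        V = M - K @ C @ M                            # covariance update
        means.append(X_hat)
        covs.append(V)
    return means, covs

# Simulate the (assumed) system X_{t+1} = A X_t + eps, Y_t = C X_t + eta, then filter it.
rng = np.random.default_rng(1)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Sigma_eps, Sigma_eta = 0.05 * np.eye(2), 0.2 * np.eye(1)
X, Ys = np.zeros(2), []
for _ in range(200):
    X = A @ X + rng.multivariate_normal(np.zeros(2), Sigma_eps)
    Ys.append(C @ X + rng.multivariate_normal(np.zeros(1), Sigma_eta))
means, covs = kalman_filter(Ys, A, C, Sigma_eps, Sigma_eta, np.zeros(2), np.eye(2))
```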

(c) In the special case where both $X$ and $Y$ are one-dimensional, and $A=C=1$, $\Sigma_{\varepsilon}=0$, find the form of the updating recursion. Show in particular that

$$\frac{1}{V_{t+1}}=\frac{1}{V_{t}}+\frac{1}{\Sigma_{\eta}}$$

and that

$$\frac{\hat{X}_{t+1}}{V_{t+1}}=\frac{\hat{X}_{t}}{V_{t}}+\frac{Y_{t+1}}{\Sigma_{\eta}}.$$

Hence deduce that, with probability one,

$$\lim _{t \rightarrow \infty} \hat{X}_{t}=\lim _{t \rightarrow \infty} t^{-1} \sum_{j=1}^{t} Y_{j}.$$
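A short numerical sketch (again with assumed illustrative values, not part of the question) makes the conclusion of (c) concrete: iterating the precision-form recursion above drives $\hat{X}_{t}$ towards the running average of the observations.

```python
# Minimal sketch of the special case in (c): A = C = 1, Sigma_eps = 0, so the state
# is a fixed unknown constant.  The true value, noise variance and prior below are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
x_true, Sigma_eta = 3.0, 0.5
X_hat, V = 0.0, 10.0                             # prior mean X_hat_0 and variance V_0
Ys = x_true + rng.normal(0.0, np.sqrt(Sigma_eta), size=10_000)

for y in Ys:
    prec = 1.0 / V + 1.0 / Sigma_eta             # 1/V_{t+1} = 1/V_t + 1/Sigma_eta
    X_hat = (X_hat / V + y / Sigma_eta) / prec   # X_hat_{t+1}/V_{t+1} = X_hat_t/V_t + Y_{t+1}/Sigma_eta
    V = 1.0 / prec

print(X_hat, Ys.mean())                          # the two agree closely for large t
```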
