4.I.6E

Mathematical Biology | Part II, 2005

The output of a linear perceptron is given by $y=\mathbf{w} \cdot \mathbf{x}$, where $\mathbf{w}$ is a vector of weights connecting a fluctuating input vector $\mathbf{x}$ to an output unit. The weights are given random initial values and are then updated according to a learning rule that has a time-constant $\tau$ much greater than the fluctuation timescale of the inputs.

(a) Find the behaviour of $|\mathbf{w}|$ for each of the following two rules: (i) $\tau \frac{d \mathbf{w}}{d t}=y \mathbf{x}$; (ii) $\tau \frac{d \mathbf{w}}{d t}=y \mathbf{x}-\alpha y^{2} \mathbf{w}|\mathbf{w}|^{2}$, where $\alpha$ is a positive constant.
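Since $\tau$ greatly exceeds the fluctuation timescale of the inputs, the standard first step (sketched here for orientation; the average $\langle\cdot\rangle$ over the input fluctuations is our notation, not part of the question as set) is to replace the right-hand side of each rule by its average. For rule (i),

$\tau \frac{d \mathbf{w}}{d t}=\langle y \mathbf{x}\rangle=\left\langle \mathbf{x} \mathbf{x}^{\mathsf{T}}\right\rangle \mathbf{w}$

so that $\frac{\tau}{2} \frac{d|\mathbf{w}|^{2}}{d t}=\mathbf{w}^{\mathsf{T}}\left\langle \mathbf{x} \mathbf{x}^{\mathsf{T}}\right\rangle \mathbf{w} \geq 0$ and $|\mathbf{w}|$ grows without bound, while the extra term in rule (ii) halts this growth at a finite value of $|\mathbf{w}|$.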

(b) Consider a third learning rule

(iii) $\tau \frac{d \mathbf{w}}{d t}=y \mathbf{x}-\mathbf{w}|\mathbf{w}|^{2}$.

Show that in a steady state the vector of weights satisfies the eigenvalue equation

$\mathbf{C} \mathbf{w}=\lambda \mathbf{w}$

where the matrix $\mathbf{C}$ and eigenvalue $\lambda$ should be identified.
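For orientation, a sketch under the same averaging assumption as above: setting $d \mathbf{w} / d t=0$ in rule (iii) and averaging gives $\langle y \mathbf{x}\rangle=\mathbf{w}|\mathbf{w}|^{2}$, i.e. $\left\langle \mathbf{x} \mathbf{x}^{\mathsf{T}}\right\rangle \mathbf{w}=|\mathbf{w}|^{2} \mathbf{w}$, which is of the stated form with $\mathbf{C}=\left\langle \mathbf{x} \mathbf{x}^{\mathsf{T}}\right\rangle$ the input correlation matrix and $\lambda=|\mathbf{w}|^{2}$.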

(c) Comment briefly on the biological implications of the three rules.
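The qualitative behaviour of all three rules can be checked numerically. The following is a minimal simulation sketch, not part of the question as set; the covariance matrix, time step, and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-D setup: inputs x with <x x^T> = C (values are assumptions).
C = np.array([[2.0, 0.5],
              [0.5, 1.0]])
L = np.linalg.cholesky(C)
tau, alpha, dt, steps = 100.0, 1.0, 0.05, 20_000

def simulate(rule):
    w = 0.1 * rng.normal(size=2)        # random initial weights
    for _ in range(steps):
        x = L @ rng.normal(size=2)      # fluctuating input sample
        y = w @ x                       # linear perceptron output
        if rule == "i":                 # plain Hebbian rule: |w| grows without bound
            dw = y * x
        elif rule == "ii":              # rule (ii): |w| settles near alpha**(-1/4)
            dw = y * x - alpha * y**2 * (w @ w) * w
        else:                           # rule (iii): |w|**2 approaches the leading
            dw = y * x - (w @ w) * w    # eigenvalue of C in the steady state
        w = w + (dt / tau) * dw
    return w

for rule in ("i", "ii", "iii"):
    print(rule, np.linalg.norm(simulate(rule)))
```

Expected behaviour (approximately): rule (i) yields a very large norm, rule (ii) a norm near $\alpha^{-1/4}=1$, and rule (iii) a norm near the square root of the leading eigenvalue of $\mathbf{C}$.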
