
It has been proposed that invariant pattern recognition might be implemented using a learning rule that utilizes a trace of previous neural activity. Given the spatio-temporal continuity of the statistics of sensory input, this trace is likely, over short time scales, to reflect the same object seen under differing transforms. Recently, it has been demonstrated that a modified Hebbian rule which incorporates a trace of previous activity, but no contribution from the current activity, can offer substantially improved performance. In this paper we show how this rule can be related to error correction rules, and explore a number of error correction rules that can be applied to this problem and can produce good invariant pattern recognition. An explicit relationship to temporal difference learning is then demonstrated, and from this further learning rules related to temporal difference learning are developed. This relationship to temporal difference learning allows us to begin to exploit established analyses of temporal difference learning to provide a theoretical framework for better understanding the operation and convergence properties of these learning rules and, more generally, of rules useful for learning invariant representations. The efficacy of these different rules for invariant object recognition is compared using VisNet, a hierarchical competitive network model of the operation of the visual system.
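The modified trace rule described above can be sketched as follows. This is an illustrative implementation, not code from the paper: the function name, the learning rate `alpha`, the trace parameter `eta`, and the linear activation are assumptions chosen for a minimal single-neuron example. The key property from the abstract is preserved: the weight update at each time step is driven only by the trace of *previous* activity, with no contribution from the current activity.

```python
import numpy as np

def trace_rule_update(w, x_seq, alpha=0.1, eta=0.8):
    """Sketch of the modified Hebbian trace rule (hypothetical
    parameterization).  Over a sequence of inputs x_seq -- assumed to
    be successive transforms of the same object -- the update at time
    t uses only the activity trace accumulated up to t-1, binding the
    different transforms to the same output neuron.
    """
    y_trace = 0.0
    for x in x_seq:
        y = float(w @ x)            # current postsynaptic activity
        # Modified rule: update driven solely by the trace of
        # previous activity (no current-activity term).
        w = w + alpha * y_trace * x
        # Only afterwards is the current activity mixed into the trace.
        y_trace = (1.0 - eta) * y + eta * y_trace
    return w
```

Because the trace starts at zero, the first presentation produces no weight change; subsequent presentations strengthen weights toward inputs that co-occur in time with earlier activity, which is what yields transform-invariant responses in this family of rules.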

Type: Journal article
Journal: Network
Publication Date: 05/2001
Volume: 12
Pages: 111-129
Keywords: Learning, Neural Networks (Computer), Neurons, Pattern Recognition, Visual, Visual Perception