We show in a 4-layer competitive neuronal network that continuous transformation learning, which uses spatial correlations and a purely associative (Hebbian) synaptic modification rule, can build view-invariant representations of complex 3D objects. This occurs even when views of the different objects are interleaved, a condition under which temporal trace learning fails. Human psychophysical experiments showed that view-invariant object learning can occur when spatial continuity applies but temporal continuity does not (because stimuli are interleaved), although sequential presentation, which produces temporal continuity, can facilitate learning. Thus continuous transformation learning is an important principle that may contribute to view-invariant object recognition.
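The core idea can be illustrated in code. The sketch below is not the paper's 4-layer model; it is a minimal single-layer competitive network with a purely Hebbian update, where successive views of an object share active input units (spatial continuity). All sizes, learning rates, and the two synthetic "objects" are hypothetical choices for illustration only: once a neuron wins one view, the shared inputs bias it to win the overlapping next view, so the association spreads across all views of an object even when views of the two objects are interleaved.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 40   # input (retina/V1-like) units -- illustrative size
n_outputs = 5   # competitive output units -- illustrative size

def views_of(start, n_views=4, width=8, step=2):
    """Illustrative 'object': a bar of active inputs that shifts by `step`
    between successive views, so adjacent views overlap spatially."""
    out = []
    for v in range(n_views):
        x = np.zeros(n_inputs)
        x[start + v * step : start + v * step + width] = 1.0
        out.append(x / np.linalg.norm(x))
    return out

# Two objects whose input regions do not overlap with each other.
obj_a = views_of(0)
obj_b = views_of(20)

# Near-uniform initial weights (tiny noise breaks ties), rows normalised.
W = np.ones((n_outputs, n_inputs)) + 0.01 * rng.random((n_outputs, n_inputs))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def train(stimuli, epochs=20, lr=0.5):
    """Winner-take-all competition + purely associative (Hebbian) update.
    Weight normalisation keeps the competition bounded."""
    for _ in range(epochs):
        for x in stimuli:
            y = W @ x
            win = int(np.argmax(y))           # competition: strongest response wins
            W[win] += lr * y[win] * x         # Hebbian: pre * post activity
            W[win] /= np.linalg.norm(W[win])  # normalise the winner's weights

# Interleave views of the two objects -- the condition where trace
# learning fails but continuous transformation learning succeeds.
interleaved = [v for pair in zip(obj_a, obj_b) for v in pair]
train(interleaved)

# After learning, all views of each object should activate one neuron,
# and the two objects should activate different neurons.
winners_a = {int(np.argmax(W @ x)) for x in obj_a}
winners_b = {int(np.argmax(W @ x)) for x in obj_b}
print(winners_a, winners_b)
```

Note that no temporal trace is used anywhere: the update depends only on the current input and output, and the invariant representation arises purely from the spatial overlap between successive transforms.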

Original publication

DOI: 10.1016/j.visres.2006.07.025
Type: Journal article
Journal: Vision Res
Publication Date: 11/2006
Volume: 46
Pages: 3994–4006
Keywords: Computer Simulation, Form Perception, Humans, Learning, Models, Neurological, Neural Networks (Computer), Photic Stimulation, Psychophysics, Retention (Psychology), Time, Visual Cortex, Visual Pathways