We propose a theory of structure learning in the primate brain. We argue that the parietal cortex is critical for learning about relations among the objects and categories that populate a visual scene. We suggest that current deep learning models exhibit poor global scene understanding because they fail to perform the relational inferences that occur in the primate dorsal stream. We review studies of neural coding in primate posterior parietal cortex (PPC), concluding that neurons in this brain area represent potentially high-dimensional inputs on a low-dimensional manifold that encodes the relative position of objects or features in physical space, and relations among entities in abstract conceptual space. We argue that this low-dimensional code supports generalisation of relational information, even in nonspatial domains. Finally, we propose that structure learning is grounded in the actions that primates take when they reach for objects or fixate them with their eyes. We sketch a model of how this might occur in neural circuits.

Original publication

DOI: 10.1016/j.pneurobio.2019.101717
Type: Journal article
Journal: Prog Neurobiol
Publication Date: 01/2020
Volume: 184
Keywords: Deep neural networks, Gestalt psychology, Parietal cortex, Scene perception, Structure learning, Animals, Deep Learning, Gestalt Theory, Humans, Learning, Mathematical Concepts, Models, Biological, Parietal Lobe, Primates, Space Perception, Visual Perception