
Infancy research demonstrating that verbal labels facilitate visual category formation suggests that infants' object categories and words develop interactively, rather than words simply being mapped "onto" pre-existing categories. To investigate the computational foundations of a system in which words and object categories develop simultaneously and interactively, we present a model of word learning based on two interacting self-organizing maps representing the auditory and visual modalities, respectively. While other models of lexical development have employed similar dual-map architectures, our model uses active Hebbian connections to propagate activation between the visual and auditory maps during learning. Our results show that categorical perception emerges from these early audio-visual interactions in both domains. We argue that the learning mechanism introduced in our model could play a role in the facilitation of infants' categorization through verbal labeling. © 2013 IEEE.
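The architecture the abstract describes can be illustrated with a minimal sketch: two self-organizing maps, one per modality, joined by a Hebbian weight matrix through which activation on the auditory map biases activity (and hence learning) on the visual map. This is not the authors' implementation; the map sizes, Gaussian activation, learning rates, and toy input vectors below are all illustrative assumptions.

```python
import numpy as np

class SOM:
    """Minimal 1-D self-organizing map (illustrative sketch, not the paper's model)."""
    def __init__(self, n_units, dim, rng):
        self.w = rng.random((n_units, dim))   # one weight vector per map unit
        self.grid = np.arange(n_units)        # unit positions on the 1-D grid

    def activation(self, x):
        # Gaussian activation falls off with distance to each unit's weight vector
        d = np.linalg.norm(self.w - x, axis=1)
        return np.exp(-d ** 2)

    def update(self, x, act, lr=0.1, sigma=2.0):
        # Move the best-matching unit and its grid neighbours toward the input
        bmu = int(np.argmax(act))
        h = np.exp(-((self.grid - bmu) ** 2) / (2 * sigma ** 2))
        self.w += lr * h[:, None] * (x - self.w)

rng = np.random.default_rng(0)
aud = SOM(20, 4, rng)        # auditory map (word forms)
vis = SOM(20, 4, rng)        # visual map (object features)
hebb = np.zeros((20, 20))    # auditory-to-visual Hebbian links

def train_step(label_vec, object_vec, lr_hebb=0.05):
    a_act = aud.activation(label_vec)
    v_act = vis.activation(object_vec)
    # Cross-modal propagation: auditory activity biases the visual map,
    # so the label influences where (and how) the object is represented
    v_act_biased = v_act + a_act @ hebb
    aud.update(label_vec, a_act)
    vis.update(object_vec, v_act_biased)
    # Hebbian learning: co-active unit pairs strengthen their link
    hebb[:] += lr_hebb * np.outer(a_act, v_act)

# Toy run: two (label, object) category pairs presented repeatedly
cats = [(np.full(4, 0.2), np.full(4, 0.25)),
        (np.full(4, 0.8), np.full(4, 0.75))]
for _ in range(50):
    for label, obj in cats:
        train_step(label, obj)
```

The key design point mirrored from the abstract is that the Hebbian connections are active during learning: the auditory activation enters the visual map's update on every trial, rather than the two maps being trained independently and linked afterwards.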

Original publication

DOI: 10.1109/TAMD.2013.2264858
Type: Journal article
Journal: IEEE Transactions on Autonomous Mental Development
Publication Date: 01/12/2013
Volume: 5
Pages: 288–297