Abstract

What do we hear when someone speaks? What does auditory cortex (AC) do with that information? I present neuroimaging data suggesting that the impressions that we simply hear sounds and that AC sits at the bottom of a feedforward processing hierarchy are the wrong answers to these questions. Rather, when the brain is engaged by naturalistic language stimuli, it appears to dramatically self-organize to use available contextual information. Context in these experiments includes preceding sounds and discourse content, observable emotional facial displays and co-speech gestures, and memories of prior experiences of observing speech-associated mouth movements and of reading. This contextual information seems to be the starting point for forming hypotheses that are used to make predictions about the nature of the ambiguous information that might arrive in AC. Strong predictions result in a large conservation of metabolic resources in AC, presumably because no further evidence from the auditory world is required to confirm the hypotheses. Thus, the results suggest that a great deal of what we hear is not sound but, rather, an echo of internal knowledge that shapes and constrains interpretation of the impoverished information reaching AC. That is, hearing speech, and AC function more generally, is a constructive process that relies on the contextual information available during real-world communication.


Hosts: Kate Watkins & Daniel Lametti