The human superior temporal gyrus is critical for extracting meaningful linguistic features from acoustic speech inputs. Local neural populations are tuned to the acoustic-phonetic features of all consonants and vowels, as well as to dynamic cues for intonational pitch. These populations are embedded throughout broader functional zones that are sensitive to amplitude-based temporal cues for prosody. Together, this distributed feature selectivity for phonetic and prosodic cues has generated a new and granular map of temporal cortex function. Beyond speech features, cortical representations are strongly modulated by learned knowledge and perceptual goals. I will review emerging insights into the remarkable emergent phonological computations that take place in this cortical region at the core of Wernicke's area.

Professor Chang's research focuses on the brain mechanisms for speech, movement, and human emotion. He co-directs the Center for Neural Engineering and Prostheses, a collaborative enterprise of UCSF and the University of California, Berkeley. The Center brings together experts in engineering, neurology, and neurosurgery to develop state-of-the-art biomedical technology to restore function for patients with neurological disabilities such as paralysis and speech disorders.

You can access the Zoom link via OxTalks, or email us to request the link.