ABSTRACT:

The newest Large Language Models (LLMs) engage in conversations through the use of prompts and produce strikingly human-like output. Empirical studies have shown that people cannot reliably distinguish texts produced by chatbots from texts produced by humans; in some applications, they even judge the chatbot output to be better than the human output. LLMs seem to pass the Turing Test. Does this mean they are intelligent? Should we worry that AI is on the verge of becoming more intelligent than we are?

This talk will make the case that the Turing Test is a weak test of intelligence, failing to probe in depth how humans acquire, deploy, and adapt their remarkably large mental lexicons. During the vocabulary spurt of toddlerhood, children learn some six to eight words a day, and the ability to create and understand new words continues throughout adulthood, making the lexical system the most plastic aspect of language. Experimental and computational studies show that these feats are achieved using avenues of abstraction that LLMs simply lack. These capabilities for abstraction give humans their efficiency and power in acquiring and using language.

ABOUT THE SPEAKER:

Janet B. Pierrehumbert is Professor of Language Modelling in the Department of Engineering Science at the University of Oxford. She holds degrees in Linguistics from Harvard and MIT. Much of her Ph.D. thesis work was done in the Department of Linguistics and AI Research at AT&T Bell Labs, where she also served as a Member of Technical Staff until 1989. She then took up a faculty position in Linguistics at Northwestern University, establishing an interdisciplinary research effort in experimental and computational linguistics. She is known for her research on prosody and intonation, as well as her work on how people acquire and use lexical systems that combine general abstract knowledge of word forms with detailed phonetic knowledge about individual words. In 2015, Pierrehumbert moved to her present position in the Oxford e-Research Centre, where she also holds a courtesy appointment in the Faculty of Linguistics, Philology and Phonetics. Her lab group currently focusses on Natural Language Processing, emphasizing questions about the robustness and interpretability of language models and the dynamics of language in communities. She is a fellow of the LSA, the Cognitive Science Society, and the American Academy of Arts and Sciences. She was elected to the National Academy of Sciences in 2019, was awarded the Medal for Scientific Achievement from the International Speech Communication Association (ISCA) in 2020, and was elected a member of the Academia Europaea in 2024.

This is a hybrid event. The seminar will be held in the seminar room at New Radcliffe House (2nd floor) and can also be followed on Zoom. You can access the Zoom link via the OxTalks listing 'A Psycholinguistic Perspective on LLMs: Does the Turing Test Really Test Intelligence?', or e-mail us at hod.office@psy.ox.ac.uk to request the Zoom details.