
Little is known about how memory resources are allocated in natural vision across sequential eye movements and fixations, as people actively extract information from the visual environment. Here, we used gaze-contingent eye tracking to examine how such resources are dynamically reallocated from old to new information entering working memory. As participants looked sequentially at items, we interrupted the process at different times by extinguishing the display as a saccade was initiated. After a brief interval, participants were probed on one of the items that had been presented. Paradoxically, across all experiments, the final (unfixated) saccade target was recalled more precisely when more items had previously been fixated, that is, with longer rather than shorter saccade sequences. This result is difficult to explain under current models of working memory, because recall error, even for the final item, typically increases with memory load. The findings could, however, be accounted for by a model that describes how resources are dynamically reallocated on a moment-by-moment basis. During each saccade, the target is encoded by consuming a proportion of currently available resources from a limited working memory, as well as by reallocating resources away from previously encoded items. These findings reveal how working memory resources are shifted across memoranda in active vision.

Original publication

DOI: 10.1037/xhp0000960
Type: Journal article
Journal: J Exp Psychol Hum Percept Perform
Publication Date: 01/2022
Volume: 48
Pages: 21–36
Keywords: Eye Movements, Eye-Tracking Technology, Humans, Memory, Short-Term, Mental Recall, Saccades