Crossmodal semantics in memory: Scoping review and meta-analyses of multisensory effects in short-term and episodic memory systems.
Soto-Faraco S., Spence C.
The human brain represents objects and events in the environment by binding together their defining semantic attributes across the senses (e.g., vision, hearing, touch). Semantic relationships between these attributes in different senses, or crossmodal semantic relationships, are fundamental to carving out meaningful categories and to encoding and storing experiences in the form of memories for later retrieval. Unsurprisingly, crossmodal semantic interactions in human memory have been on the agenda of researchers interested in multisensory processes for several decades, and the field is currently experiencing a renewed wave of interest. By and large, the central question has been whether events with crossmodally congruent semantic attributes are better remembered. Nevertheless, this research area has been characterized by mixed methodological approaches, inconsistent outcomes, and competing theoretical interpretations, with few attempts at synthesis. Here, we examine the past 30 years of research on the topic, covering both short-term and episodic memory systems. First, we gather the existing evidence in a systematic scoping review of studies, complemented by meta-analyses. Then, we provide a synthesis highlighting outstanding empirical questions and potential contradictions between competing theoretical interpretations. With some exceptions, there is abundant support for the hypothesis that crossmodally congruent events are better remembered than single-modality or crossmodal but incongruent events. Nevertheless, the mechanisms underlying this multisensory benefit, and its theoretical interpretation, remain the subject of substantial debate. We propose avenues to resolve these issues and advance current knowledge in this burgeoning research area. (PsycInfo Database Record (c) 2025 APA, all rights reserved).