Dissociating the time courses of the cross-modal semantic priming effects elicited by naturalistic sounds and spoken words.
Chen Y-C., Spence C.
The present study compared the time courses of the cross-modal semantic priming effects elicited by naturalistic sounds and spoken words on visual picture processing. Following an auditory prime, a picture (or a blank frame) was briefly presented and then immediately masked. The participants had to judge whether or not a picture had been presented. Naturalistic sounds consistently elicited a cross-modal semantic priming effect on visual sensitivity (d') for pictures (higher d' in the congruent than in the incongruent condition) at the 350-ms, rather than at the 1,000-ms, stimulus onset asynchrony (SOA). Spoken words mainly elicited a cross-modal semantic priming effect at the 1,000-ms, rather than at the 350-ms, SOA, but this effect was modulated by the order in which the two SOAs were tested. It would therefore appear that visual picture processing can be rapidly primed by naturalistic sounds via cross-modal associations, although this effect is short-lived. In contrast, spoken words prime visual picture processing over a wider range of prime-target intervals, though this effect was conditioned by the prior context.
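Note: The abstract does not state how sensitivity was computed; presumably d' here is the standard signal-detection index for the picture-detection task, which under the usual equal-variance Gaussian assumptions is

    d' = \Phi^{-1}(H) - \Phi^{-1}(FA)

where H is the hit rate (proportion of "picture present" responses on picture trials), FA is the false-alarm rate (proportion of "picture present" responses on blank-frame trials), and \Phi^{-1} is the inverse of the standard normal cumulative distribution function. This formula is an assumption offered for clarity, not a detail taken from the paper itself.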