{ "items": [ "\n\n
We investigated differences in people's ability to reconstruct the appropriate spatiotemporal ordering of multiple tactile stimuli presented in frontal space (a region where visual inputs tend to dominate) versus in the space behind the back (a region of space that we rarely see) in professional piano players and in non-musicians. Even though tactile temporal order judgments were much better in the musicians overall, both groups showed a much reduced crossed-hands deficit when their hands were crossed behind their backs rather than in front. These results suggest that, because of differences in the availability of visual input, the spatiotemporal representation of non-visual stimuli differs between front and rear space.
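
The crossed-hands deficit, and its reduction in rear space, can be expressed as a difference-of-differences on temporal order sensitivity. The sketch below works through that arithmetic on hypothetical just noticeable differences (JNDs); the numbers are purely illustrative, not data from the study.

```python
# Hypothetical JNDs (ms) for a 2x2 design:
# posture (uncrossed vs. crossed) x hand location (front vs. behind the back).
jnd = {
    ("front", "uncrossed"): 70,
    ("front", "crossed"): 250,
    ("back", "uncrossed"): 75,
    ("back", "crossed"): 120,
}

# The crossed-hands deficit is the JND increase caused by crossing the hands.
deficit_front = jnd[("front", "crossed")] - jnd[("front", "uncrossed")]
deficit_back = jnd[("back", "crossed")] - jnd[("back", "uncrossed")]

# A smaller deficit behind the back than in front (a posture x location
# interaction) is the pattern described in the abstract above.
print(f"front deficit: {deficit_front} ms")                     # 180 ms
print(f"back deficit:  {deficit_back} ms")                      # 45 ms
print(f"reduction in rear space: {deficit_front - deficit_back} ms")
```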

Shore et al. [D.I. Shore, E. Spry, C. Spence, Spatial modulation of tactile temporal order judgments, Perception (submitted for publication)] recently demonstrated that people find it easier to judge which hand is touched first (in a tactile temporal order judgment task) when their hands are placed far apart rather than close together. In the present study, we used a mirror to manipulate the visually perceived distance between participants' hands while holding the actual (i.e., proprioceptively specified) distance between them constant. Participants were asked to determine which of two vibrotactile stimuli, one presented to each index finger using the method of constant stimuli, was presented first. Performance was significantly worse (i.e., the just noticeable difference, JND, was larger) when the hands were perceived (due to the mirror reflection) as being close together rather than farther apart. These results highlight the critical role that vision plays in influencing the conscious perception of the temporal order of tactile stimuli.
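
In tasks like this one, the PSS and JND are typically derived by fitting a cumulative Gaussian to the proportion of one response type as a function of SOA. A minimal sketch of such a fit, using made-up response proportions and assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical TOJ data: SOAs in ms (negative = left hand stimulated first)
# and the proportion of "right hand first" responses at each SOA.
soas = np.array([-200, -90, -55, -30, 30, 55, 90, 200], dtype=float)
p_right_first = np.array([0.03, 0.10, 0.25, 0.40, 0.62, 0.78, 0.92, 0.98])

def cum_gauss(soa, pss, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(soa, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soas, p_right_first, p0=(0.0, 50.0))

# PSS: the SOA at which the two response alternatives are equally likely.
# JND: half the distance between the 25% and 75% points, which for a
# cumulative Gaussian equals norm.ppf(0.75) * sigma (about 0.675 * sigma).
jnd = norm.ppf(0.75) * sigma
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```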

People simply cannot do two things at once, as shown by research on the so-called psychological refractory period. A new neuroimaging study has now localized the response-selection bottleneck underlying the psychological refractory period to a frontoparietal network.

Participants made speeded discrimination responses to unimodal auditory stimuli (low-frequency vs. high-frequency sounds) or vibrotactile stimuli (presented to the index finger, upper location, vs. to the thumb, lower location). In the compatible blocks of trials, the implicitly related stimuli (i.e., high-frequency sounds and upper tactile stimuli; low-frequency sounds and lower tactile stimuli) were associated with the same response key; in the incompatible blocks, the weakly related stimuli (i.e., high-frequency sounds and lower tactile stimuli; low-frequency sounds and upper tactile stimuli) were associated with the same response key. Better performance was observed in the compatible (vs. incompatible) blocks, thus providing empirical support for a cross-modal association between the relative frequency of a sound and the relative elevation of a tactile stimulus.
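
The compatibility effect described here is simply the difference in speeded-response performance between the two block types. A hedged sketch of how one might quantify it from per-participant mean reaction times (the values below are invented for illustration):

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-participant mean reaction times (ms) in the compatible
# and incompatible response-mapping blocks (one entry per participant).
rt_compatible = np.array([412, 455, 389, 501, 433, 468, 420, 447])
rt_incompatible = np.array([448, 470, 425, 540, 450, 495, 462, 471])

# The compatibility effect is the per-participant RT difference between
# incompatible and compatible blocks; a paired t-test assesses reliability.
effect = rt_incompatible - rt_compatible
t, p = ttest_rel(rt_incompatible, rt_compatible)

print(f"mean compatibility effect: {effect.mean():.1f} ms")
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")
```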

We investigated whether the perception of simultaneity for pairs of nociceptive and visual stimuli depends on which sensory modality (pain or vision) participants attend to. Two stimuli (one painful and the other visual) were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded verbal responses as to which stimulus they perceived as having been presented first, or else responded that the two stimuli had been presented simultaneously. This temporal discrimination task was repeated under three different attention conditions (blocks): divided attention, attend pain, and attend vision. The results showed that under conditions of divided attention, nociceptive stimuli had to be presented before visual stimuli in order for the two to be perceived as simultaneous. A comparison of the two focused attention conditions revealed that the painful stimulus was perceived as occurring earlier in time (relative to the visual stimulus) when attention was directed toward pain than when it was directed toward vision. These results provide the first empirical demonstration that attention can modulate the temporal perception of painful stimuli.
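
For readers unfamiliar with the method of constant stimuli, the procedure amounts to presenting a fixed set of SOAs equally often, in random order, within each block. A minimal sketch of such a trial schedule (the block names and SOA values are illustrative, not the study's actual parameters):

```python
import random

# Hypothetical constant-stimuli schedule: each SOA (ms; negative =
# nociceptive stimulus first) is repeated equally often and shuffled
# within each attention block.
soas = [-200, -120, -60, -30, 0, 30, 60, 120, 200]
repetitions = 12
blocks = ["divided", "attend pain", "attend vision"]

schedule = {}
for block in blocks:
    trials = soas * repetitions      # equal number of trials per SOA
    random.shuffle(trials)           # unpredictable presentation order
    schedule[block] = trials

print({b: len(t) for b, t in schedule.items()})  # 108 trials per block
```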

We investigated whether attending to a particular point in time affects temporal resolution in a task in which participants judged which of two visual stimuli had been presented first. The results showed that temporal resolution can be improved by attending to the relevant moment, as indicated by a temporal cue. This novel finding is discussed in terms of the differential effects of spatial and temporal attention on temporal resolution.

Temporally synchronous auditory cues can facilitate participants' performance on dynamic visual search tasks, and making the auditory cues spatially informative with regard to the target location can reduce search latencies still further. In the present study, we investigated how multisensory integration and temporal and spatial attention might conjointly influence participants' performance on an elevation discrimination task for a masked visual target presented in a rapidly changing sequence of masked visual distractors. Participants were presented with spatially uninformative (centrally presented), spatially valid (presented on the same side as the target), or spatially invalid tones that were synchronous with the presentation of the visual target. Participants responded significantly more accurately following spatially valid cues than following uninformative or invalid ones. Participants endogenously shifted their attention to the likely target location indicated by the valid spatial cue (reflecting top-down, cognitive processing mechanisms), which facilitated their processing of the visual target over and above any bottom-up benefit associated solely with the synchronous presentation of the auditory and visual stimuli. The results of the present study therefore suggest that crossmodal attention (both spatial and temporal) and multisensory integration can work in parallel to facilitate people's ability to respond efficiently to multisensory information.

Sound localization can be affected by vision: in the ventriloquism effect, sounds that are hard to localize by hearing alone become mislocalized toward the location of concurrent visual events. Here we tested whether spatial attention is drawn to the illusory location of a ventriloquized sound. The study exploited our previous finding that visual cues do not attract auditory attention. We report an important exception to this rule: auditory attention can be drawn to the location of a visual cue when it is paired with a concurrent unlocalizable sound, so as to produce ventriloquism. This demonstrates that crossmodal integration can precede reflexive shifts of attention, with such shifts taking place toward the crossmodally determined illusory location of a sound. It also shows that ventriloquism arises automatically, with objective as well as subjective consequences.

We investigated the differential effects of olfactory stimulation on dual-task performance under conditions of varying task difficulty. Participants detected visually presented target digits from amongst a stream of visually presented distractor letters in a rapid serial visual presentation (RSVP) task. At the same time, participants also made speeded discrimination responses to vibrotactile stimuli presented on the front or back of their torso. The response mapping was either compatible or incompatible (i.e., lifting their toes for front vibrations and their heels for back vibrations, or vice versa). Synthetic peppermint odor or clean air (control) was delivered periodically, for 35 s out of every 315 s. The results showed a significant performance improvement in the presence of the peppermint odor (as compared to air) when the response mapping was incompatible (i.e., in the difficult task) but not when it was compatible (i.e., in the easy task). Our results provide the first empirical demonstration that olfactory stimulation can facilitate tactile performance, and they also highlight the potential modulatory role of task difficulty in odor-induced performance facilitation.

It is almost one hundred years since Titchener [E.B. Titchener, Lectures on the Elementary Psychology of Feeling and Attention, Macmillan, New York, 1908] published his influential claim that attending to a particular sensory modality (or location) can speed up the relative time of arrival of stimuli presented in that modality (or at that location). However, the evidence supporting the existence of such prior entry has, to date, been mixed. In the present study, we used an audiovisual simultaneity judgment task in an attempt to circumvent the methodological confounds inherent in previous research in this area. Participants made simultaneous versus successive judgments regarding pairs of auditory and visual stimuli presented at varying stimulus onset asynchronies (SOAs) using the method of constant stimuli. In different blocks of trials, the participants were instructed to attend either to the auditory or to the visual modality, or else to divide their attention equally between the two modalities. The probability of trials containing intramodal stimulus pairs (e.g., vision-vision or audition-audition) was increased in the focused attention blocks to encourage participants to follow the instructions. The perception of simultaneity was modulated by this attentional manipulation: visual stimuli had to lead auditory stimuli by a significantly smaller interval for simultaneity to be perceived when attention was directed to vision than when it was directed to audition. These results provide the first unequivocal evidence for the existence of audiovisual prior entry.
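
In a simultaneity judgment task, the PSS is commonly estimated as the peak of a bell-shaped function fitted to the proportion of "simultaneous" responses across SOAs; prior entry then shows up as a shift of that peak between attention conditions. A sketch under those assumptions, with invented response proportions:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(soa, amp, pss, width):
    """Bell-shaped fit to the proportion of 'simultaneous' responses;
    the location of the peak is taken as the PSS."""
    return amp * np.exp(-((soa - pss) ** 2) / (2 * width ** 2))

# Hypothetical proportions of "simultaneous" responses at each SOA
# (ms; negative = visual stimulus first) in the two focused conditions.
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_attend_vision = np.array([0.04, 0.12, 0.45, 0.78, 0.90, 0.80, 0.48, 0.14, 0.04])
p_attend_audition = np.array([0.06, 0.25, 0.70, 0.88, 0.82, 0.55, 0.30, 0.08, 0.03])

(_, pss_v, _), _ = curve_fit(gauss, soas, p_attend_vision, p0=(0.9, 0.0, 100.0))
(_, pss_a, _), _ = curve_fit(gauss, soas, p_attend_audition, p0=(0.9, 0.0, 100.0))

# A more negative PSS under attend-audition means vision had to lead by
# more, which is the prior-entry pattern described in the abstract above.
print(f"PSS attend-vision: {pss_v:.1f} ms, attend-audition: {pss_a:.1f} ms")
print(f"prior-entry shift: {pss_v - pss_a:.1f} ms")
```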

This review addresses the question of when spatial coincidence facilitates multisensory integration in humans. According to the spatial rule (first formulated on the basis of neurophysiological data from anesthetized animals), multisensory integration is enhanced when stimuli in different sensory modalities are presented from the same spatial location. While the spatial rule fits with the available data from studies of overt and covert spatial attentional orienting, and from the majority of those studies in which space has been somehow relevant to the participant's task, it is inconsistent with the evidence that has emerged from the majority of multisensory studies of stimulus identification and temporal perception. Such a mixed pattern of behavioral results suggests that the spatial rule does not represent a general constraint on multisensory integration in humans. Instead, it appears to be a much more task-dependent phenomenon than is often realized. These results are, however, broadly consistent with a distinction between the processing of "where" and "what" (or "how") information in the human brain.

People often fail to respond to an auditory target if they have to respond to a visual target presented at the same time, a phenomenon known as the Colavita visual dominance effect. To date, the Colavita effect has only ever been demonstrated in detection tasks in which participants respond to pre-defined visual, auditory, or bimodal audiovisual target stimuli. Here, we tested the Colavita effect when the target was defined by a rule, namely the repetition of any event (a picture, a sound, or both) in simultaneously presented streams of pictures and sounds. Given previous findings that people are better at detecting auditory repetitions than visual repetitions, we expected that the Colavita visual dominance effect might disappear (or even reverse). Contrary to this prediction, however, visual dominance (i.e., the typical Colavita effect) was observed, with participants still neglecting significantly more auditory events than visual events in response to bimodal targets. This visual dominance for bimodal repetitions was observed despite the fact that participants missed significantly more unimodal visual repetitions than unimodal auditory repetitions. These results therefore extend the Colavita visual dominance effect to a domain where auditory dominance has traditionally been observed. In addition, our results reveal that the Colavita effect occurs at a more abstract, rule-based level of representation than that tested in previous research.

Research has shown that unreported information stored in rapidly decaying visual representations may be accessed more accurately using partial report than using full report procedures (e.g., Sperling, G., 1960. The information available in brief visual presentations. Psychological Monographs, 74, 1-29). In the three experiments reported here, we investigated whether unreported information regarding the actual number of tactile stimuli presented in parallel across the body surface can be accessed using a partial report procedure. In Experiment 1, participants had to report the total number of stimuli in a tactile display composed of up to six stimuli presented across their body (numerosity task), or else to detect whether or not a tactile stimulus had been presented at a position indicated by a visual probe delivered at a variable delay after the offset of the tactile display (partial report task). The results showed that participants correctly reported up to three stimuli in the numerosity judgment task, but their performance was significantly better than chance with up to five stimuli in the partial report task. This result shows that short-lasting tactile representations can be accessed using partial report procedures similar to those used previously in visual studies. Experiment 2 showed that the duration of these representations (or the time available to consciously access them) depends on the number of stimuli presented in the display: the greater the number of stimuli presented, the faster their representation decays. Finally, the results of a third experiment showed that the differences in performance between the numerosity judgment and partial report tasks could not be explained solely in terms of any difference in task difficulty.
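
Because the partial report task described here is a yes/no probe of a single body location, chance performance is 50%, and "significantly better than chance" can be assessed with a simple binomial test. A minimal sketch with invented trial counts (assuming SciPy 1.7+ for scipy.stats.binomtest):

```python
from scipy.stats import binomtest

# Hypothetical partial-report outcome: the visual probe cues one body
# location and the participant reports whether it had been stimulated,
# so guessing yields 50% correct. Suppose a participant answered 86 of
# 120 probe trials correctly with 5-stimulus displays.
result = binomtest(k=86, n=120, p=0.5, alternative="greater")
print(f"accuracy = {86 / 120:.2f}, p = {result.pvalue:.4g}")  # above chance?
```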

This study investigated people's sensitivity to audiovisual asynchrony in briefly presented speech and musical videos. A series of speech (letters and syllables) and guitar and piano music (single and double notes) video clips were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which stream (auditory or visual) appeared to have been presented first. The accuracy of participants' TOJ performance (measured in terms of the just noticeable difference; JND) was significantly better for the speech than for either the guitar or the piano music video clips, suggesting that people are more sensitive to asynchrony in speech than in music stimuli. The visual stream had to lead the auditory stream for the point of subjective simultaneity (PSS) to be achieved in the piano music clips, while auditory leads were typically required for the guitar music clips. The PSS values obtained for the speech stimuli varied substantially as a function of the particular speech sound presented. These results provide the first empirical evidence regarding people's sensitivity to audiovisual asynchrony for musical stimuli. They also demonstrate that people's sensitivity to asynchrony in speech stimuli is better than previous research using continuous speech streams had suggested.

In the present study, we explored the role of visual perceptual grouping in audiovisual motion integration, using an adaptation of the crossmodal dynamic capture task developed by Soto-Faraco et al. The principles of perceptual grouping were used to vary the perceived direction (horizontal vs. vertical) and extent of apparent motion within the visual modality. When the critical visual stimuli, which gave rise to horizontal local motion, were embedded within a larger array of lights giving rise to the perception of vertical global motion, the influence of the visual motion information on the perception of auditory apparent motion (moving horizontally) was significantly reduced. These results highlight the need to consider intramodal perceptual grouping when investigating crossmodal perceptual grouping.

We investigated the extent to which intramodal visual perceptual grouping influences the multisensory integration (or grouping) of auditory and visual motion information. Participants discriminated the direction of motion of two sequentially presented sounds (moving leftward or rightward), while simultaneously trying to ignore a task-irrelevant visual apparent motion stream. The principles of perceptual grouping were used to vary the direction and extent of apparent motion within the irrelevant modality (vision). The results demonstrate that the multisensory integration of motion information can be modulated by the perceptual grouping taking place unimodally within vision, suggesting that unimodal perceptual grouping processes precede multisensory integration. The present study therefore illustrates how intramodal and crossmodal perceptual grouping processes interact to determine how the information in complex multisensory environments is parsed.

We investigated the perception of synchrony for complex audiovisual events. In Experiment 1, a series of music (guitar and piano), speech (sentences), and object action video clips were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which stream (auditory or visual) appeared to have been presented first. Temporal discrimination accuracy was significantly better for the object action clips than for the speech clips, and both were significantly better than for the music clips. In order to investigate whether these differences in TOJ performance were driven by differences in stimulus familiarity, we conducted a second experiment using brief speech (syllables), music (guitar), and object action video clips of fixed duration, together with temporally reversed (i.e., less familiar) versions of the same stimuli. The results showed no main effect of stimulus type on temporal discrimination accuracy. Interestingly, however, reversing the video clips resulted in a significant decrement in temporal discrimination accuracy, as compared to the normally presented versions, for the music and object action clips, but not for the speech stimuli. Overall, our results suggest that cross-modal temporal discrimination performance is better for audiovisual stimuli of lower complexity than for stimuli having continuously varying properties (e.g., syllables versus words and/or sentences).

The superior colliculus generates and controls eye and head movements based on signals from different senses. The latest research on this structure enhances our understanding of the mechanisms of multisensory integration in the brain.

Researchers have known for more than a century that crossing the hands can impair both tactile perception and the execution of appropriate finger movements. Sighted people find it more difficult to judge the temporal order of two tactile stimuli, one applied to each hand, when their hands are crossed over the midline than when they adopt a more typical uncrossed posture. It has been argued that, because of the dominant role of vision in motor planning and execution, tactile stimuli are remapped into externally defined coordinates (predominantly determined by visual inputs); this remapping takes longer when external and body-centered codes (the latter determined primarily by somatosensory/proprioceptive inputs) are in conflict, and it involves both multisensory parietal areas and visual cortex. Here, we show that the performance of late blind, but not of congenitally blind, people was impaired by crossing the hands. Moreover, we provide the first empirical evidence of superior temporal order judgments (TOJs) for tactile stimuli in the congenitally blind. These findings suggest a critical role for childhood vision in modulating the perception of touch, one that may arise from the emergence of specific crossmodal links during development.