Multisensory object-recognition processes were investigated by examining the combined influence of visual and auditory inputs on object identification, in this case pictures and vocalizations of animals. Behaviorally, subjects were significantly faster and more accurate at identifying targets when the picture and vocalization were matched (i.e., from the same animal) than when the target was presented in only one sensory modality. This behavioral enhancement was accompanied by a modulation of the evoked potential in the latency range and general topographic region of the visual evoked N1 component, which is associated with early feature processing in the ventral visual stream. High-density topographic mapping and dipole modeling of this multisensory effect were consistent with generators in lateral occipito-temporal cortices, suggesting that auditory inputs were modulating processing in regions of the lateral occipital cortices. Both the timing and scalp topography of this modulation suggest that multisensory effects occur during what is considered a relatively early stage of visual object-recognition processing, and that this modulation takes place in regions of the visual system that have traditionally been held to be unisensory processing areas. Multisensory inputs also modulated the visual 'selection negativity', an attention-dependent component of the evoked potential that is usually elicited when subjects selectively attend to a particular feature of a visual stimulus.
Keywords
- Electrical mapping
- Object recognition
ASJC Scopus subject areas
- Cognitive Neuroscience
- Cellular and Molecular Neuroscience