Seeing and Perceiving

Spatial Shifts of Audio-Visual Interactions by Perceptual Learning are Specific to the Trained Orientation and Eye


Abstract

A large proportion of the human cortex is devoted to visual processing. Contrary to the traditional belief that multimodal integration takes place in multimodal processing areas separate from visual cortex, several studies have found that sounds may directly alter processing in visual brain areas. Furthermore, recent findings show that perceptual learning can change the perceptual mechanisms that relate the auditory and visual senses. However, there is still debate about the systems involved in cross-modal learning. Here, we investigated the specificity of audio-visual perceptual learning. Audio-visual cueing effects were tested on a Gabor orientation task and an object discrimination task in the presence of lateralised sound cues before and after eight days of cross-modal, task-irrelevant perceptual learning. During training, the sound cues were paired with visual stimuli that were misaligned at a proximal (trained) visual field location relative to the sound. Training was performed with one eye patched and with only one Gabor orientation. Consistent with previous findings, we found that cross-modal perceptual training shifted the audio-visual cueing effect towards the trained retinotopic location. However, this shift in audio-visual tuning was only observed for the trained stimulus (Gabors), at the trained orientation, and in the trained eye. This specificity suggests that the multimodal interactions resulting from cross-modal (audio-visual) task-irrelevant perceptual learning involve so-called unisensory visual processing areas in humans. Our findings provide further support for recent anatomical and physiological findings that suggest relatively early interactions in cross-modal processing.
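For context, the Gabor stimuli referred to above are conventionally defined as a sinusoidal luminance grating windowed by a Gaussian envelope. The abstract does not report the exact parameters used, so the expression below is only the standard textbook form, with orientation \theta, spatial frequency f, phase \phi, and envelope width \sigma introduced here purely for illustration:

G(x, y) = \exp\!\left(-\frac{x^{2} + y^{2}}{2\sigma^{2}}\right)\cos\!\bigl(2\pi f\,(x\cos\theta + y\sin\theta) + \phi\bigr)

Under this parameterisation, training with "only one Gabor orientation" corresponds to holding \theta fixed across all training trials, while the cueing tests probe transfer to other values of \theta, to the untrained eye, and to a different stimulus class.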
