
Visual influence on path integration in darkness indicates a multimodal representation of large-scale space



Abstract

Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map.
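The abstract refers to a quantitative model in which visual and interoceptive signals combine into a single estimate that guides homing. As a rough illustration of how such a weighted combination yields testable predictions in a triangle-completion task, the Python sketch below compares an interoception-only account with a combined account; the weighting scheme, the gain value, the path geometry, and the function name predicted_homing_turn are illustrative assumptions, not the authors' actual model.

    import numpy as np

    def predicted_homing_turn(theta_physical, gain, w, leg1=3.0, leg2=3.0):
        """Predicted turn toward the start after a two-leg outbound path.

        theta_physical : physical turn between the two legs (radians)
        gain           : visual rotation gain (visual turn = gain * physical turn)
        w              : weight on the visual signal in the combined estimate;
                         w = 0 reproduces an interoception-only model
        leg1, leg2     : outbound leg lengths (arbitrary units)
        """
        # A single multimodal estimate of the turn, as a weighted combination.
        theta_hat = (w * gain + (1.0 - w)) * theta_physical

        # Believed end-of-path position, starting at the origin heading along +x.
        end = np.array([leg1 + leg2 * np.cos(theta_hat), leg2 * np.sin(theta_hat)])

        # Bearing back to the start, relative to the perceived heading,
        # wrapped into [-pi, pi).
        turn = np.arctan2(-end[1], -end[0]) - theta_hat
        return (turn + np.pi) % (2.0 * np.pi) - np.pi

    # A 90-degree physical turn walked under a visual rotation gain of 0.7:
    # an interoception-only model (w = 0) ignores the gain, while a combined
    # model (w > 0) predicts a systematically shifted homing turn.
    for w in (0.0, 0.5):
        turn = predicted_homing_turn(np.deg2rad(90.0), gain=0.7, w=w)
        print(f"w = {w:.1f}: predicted homing turn = {np.rad2deg(turn):6.1f} deg")

The only point of the sketch is that the two accounts diverge once the visual gain differs from 1, which is the contrast the gain manipulations in the study exploit.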

Bibliographic information

  • Source: Proceedings of the National Academy of Sciences of the United States of America
  • Author affiliations

    UCL Institute of Cognitive Neuroscience, University College London, London WC1N 3AR, United Kingdom; Max Planck Institute for Biological Cybernetics, Tuebingen 72076, Germany;

    Max Planck Institute for Biological Cybernetics, Tuebingen 72076, Germany; Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713, Korea;

    UCL Institute of Cognitive Neuroscience, University College London, London WC1N 3AR, United Kingdom; UCL Institute of Neurology, University College London, London WC1N 3BG, United Kingdom;

  • Indexed in: Science Citation Index (SCI); MEDLINE; Chemical Abstracts (CA)
  • Original format: PDF
  • Language: English (eng)
  • CLC classification
  • Keywords
