International Conference on BioMedical Engineering and Informatics

Mechanisms of visual-auditory temporal processing for artificial intelligence

Abstract

In everyday life, our brains integrate various kinds of information from different modalities to perceive our complex environment. Temporal synchrony of audiovisual stimuli is required for audiovisual integration. Many studies have shown that temporal asynchrony of visual-auditory stimuli can influence the interaction between visual and auditory stimuli; however, the multisensory mechanisms underlying asynchronous inputs are not well understood. In the present study, visual and auditory stimuli were presented with varying stimulus onset asynchronies (SOA = ±250 ms, ±200 ms, ±150 ms, ±100 ms, ±50 ms, 0 ms), and only the auditory stimulus was attended. The behavioral results showed that responses to temporally asynchronous audiovisual stimuli were more accurate than responses to unimodal auditory stimuli. The most significant enhancement occurred in the SOA = -100 ms condition (visual stimulus preceding), which yielded the fastest reaction time. These results reveal the basis of audiovisual interaction when audiovisual stimuli are presented with different SOAs. Temporal alignment of visual-auditory stimuli can enhance auditory detection. This study offers a theoretical basis for artificial intelligence.
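The abstract specifies the SOA conditions and the behavioral measures (accuracy and reaction time) used to compare asynchronous audiovisual trials against unimodal auditory trials. Below is a minimal Python sketch, not taken from the paper, showing one way such per-trial data could be grouped by SOA and summarized. The function name, data layout, and the sample trials are hypothetical; the sign convention (negative SOA = visual leads) follows the abstract.

```python
# Hypothetical sketch (not the authors' code): group per-trial behavioral data
# by stimulus onset asynchrony (SOA) and compute accuracy and mean reaction time.
import numpy as np

# SOA conditions from the abstract; negative values mean the visual stimulus leads (ms).
SOAS_MS = [-250, -200, -150, -100, -50, 0, 50, 100, 150, 200, 250]

def summarize_by_soa(soa, rt, correct):
    """Return accuracy and mean reaction time for each SOA condition.

    soa     : per-trial SOA values in ms
    rt      : per-trial reaction times in ms (only correct trials enter the RT mean)
    correct : boolean array, True where the auditory target was detected
    """
    soa = np.asarray(soa)
    rt = np.asarray(rt, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    summary = {}
    for s in SOAS_MS:
        mask = soa == s
        if not mask.any():
            continue
        acc = correct[mask].mean()
        hit = mask & correct
        mean_rt = rt[hit].mean() if hit.any() else np.nan
        summary[s] = {"accuracy": acc, "mean_rt_ms": mean_rt}
    return summary

if __name__ == "__main__":
    # Made-up trials purely to demonstrate the data layout.
    rng = np.random.default_rng(0)
    soa = rng.choice(SOAS_MS, size=200)
    rt = rng.normal(450, 60, size=200)       # placeholder reaction times (ms)
    correct = rng.random(200) < 0.9          # placeholder detection outcomes
    for s, stats in sorted(summarize_by_soa(soa, rt, correct).items()):
        print(f"SOA {s:+5d} ms  acc={stats['accuracy']:.2f}  RT={stats['mean_rt_ms']:.0f} ms")
```

Plotting mean reaction time against SOA from a summary like this would show the reported advantage around SOA = -100 ms as a minimum in the reaction-time curve.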
