Journal: Seeing and Perceiving

Some Behavioral and Neurobiological Constraints on Theories of Audiovisual Speech Integration: A Review and Suggestions for New Directions



Abstract

Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield's feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration.
