Canadian Acoustics
Perceiving Visible Speech Articulations In Virtual Reality

Abstract

Advances in virtual reality (VR) and avatar technologies have created new platforms for face-to-face communication in which visual speech information is presented through avatars using simulated articulatory movements. These movements are typically generated in real time by algorithmic response to acoustic parameters. While the communicative experience in VR has become increasingly realistic, the visual speech articulations remain intentionally imperfect and focused on synchrony to avoid uncanny valley effects [1]. While considerable previous research has demonstrated that listeners can incorporate visual speech information produced by computer-simulated faces with precise and pre-programmed articulations [2], it is unknown whether perceivers can make use of such underspecified and at times misleading simulated visual cues to speech.
