
Evaluations of embedded Modules dedicated to multimodal Human-Robot Interaction

Abstract

With personal robotics and assistance to dependent people, robots are in continuous interaction with humans. To enable more natural communication based on speech and gesture, robots must be endowed with auditory and visual perception capabilities. This paper describes a modular multimodal interface based on speech and gestures for controlling an interactive robot called Jido. We describe how the speech recognition and understanding module processes deictic and anaphoric utterances in a robust way, and how gesture is taken into account at the fusion level to complement speech utterances. The two other modules, namely gesture interpretation and probabilistic multimodal fusion, are then briefly described. Their integration and the associated robotics experiments are finally reported. These experiments were carried out in the context of a multimodal interactive manipulation task involving successive local motion and handling commands. Our object-exchange scenario was run successfully several times with different users.
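
The abstract refers to a probabilistic multimodal fusion step that complements deictic speech with gesture. The sketch below is purely illustrative and not taken from the paper: it assumes each modality produces a distribution over candidate objects (all names and numbers are hypothetical) and combines them by elementwise product and renormalization, a naive Bayesian fusion under an independence assumption.

```python
import numpy as np

def fuse(speech_probs: np.ndarray, gesture_probs: np.ndarray) -> np.ndarray:
    """Combine per-modality object distributions by elementwise product
    and renormalize (naive Bayesian fusion assuming independent modalities)."""
    joint = speech_probs * gesture_probs
    return joint / joint.sum()

# Three candidate objects on the table (hypothetical example).
speech_probs = np.array([0.4, 0.4, 0.2])   # "take that one" is ambiguous from speech alone
gesture_probs = np.array([0.1, 0.8, 0.1])  # pointing gesture strongly selects object 1

fused = fuse(speech_probs, gesture_probs)
print(fused.argmax(), fused)  # object 1 is selected after fusion
```

In such a scheme, a deictic utterance that is ambiguous on its own is disambiguated once the gesture distribution is folded in, which is the general idea the fusion module described above exploits.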
