Real-time framework for multimodal human-robot interaction

Human System Interactions, 2009 (HSI '09)


Abstract

This paper presents a new framework for real-time multimodal data processing. The framework comprises modules for different input and output signals and was designed for human-human and human-robot interaction scenarios. Individual modules for recording selected channels such as speech, gestures, or facial expressions can be combined with different output options (i.e., robot reactions) in a highly flexible manner. Depending on the included modules, both online and offline data processing are possible. The framework was used to analyze human-human interaction to gain insights into important factors and their dynamics. The recorded data comprise speech, facial expressions, gestures, and physiological data. This naturally produced data was annotated and labeled in order to train recognition modules that will be integrated into the existing framework. The overall aim is to create a system that can recognize and react to the parameters humans take into account during interaction. The paper presents the technical implementation and its application in a human-human and a human-robot interaction scenario.
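The full paper is not included in this record, so the following is only a minimal sketch, assuming a plain event-queue design, of how the modular input/output pipeline described in the abstract might be organized. Every name in it (Event, InputModule, OutputModule, Framework, run_online, run_offline) is a hypothetical illustration, not the authors' actual API.

```python
# Hypothetical sketch of a modular multimodal pipeline; not the paper's code.
import queue
import time
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Event:
    """A timestamped observation from one input channel."""
    channel: str      # e.g. "speech", "gesture", "facial_expression"
    timestamp: float
    payload: object


class InputModule:
    """Records one channel and pushes events onto a shared queue."""
    def __init__(self, channel: str, bus: "queue.Queue[Event]"):
        self.channel = channel
        self.bus = bus

    def emit(self, payload: object) -> None:
        self.bus.put(Event(self.channel, time.time(), payload))


class OutputModule:
    """Maps a recognized event to a reaction, e.g. a robot behavior."""
    def __init__(self, react: Callable[[Event], None]):
        self.react = react


class Framework:
    """Combines input and output modules; the same dispatch path serves
    online (live queue) and offline (recorded data) processing."""
    def __init__(self) -> None:
        self.bus: "queue.Queue[Event]" = queue.Queue()
        self.outputs: List[OutputModule] = []

    def add_output(self, module: OutputModule) -> None:
        self.outputs.append(module)

    def dispatch(self, event: Event) -> None:
        for out in self.outputs:
            out.react(event)

    def run_online(self, timeout: float = 1.0) -> None:
        """Process events as they arrive from live input modules."""
        while True:
            try:
                self.dispatch(self.bus.get(timeout=timeout))
            except queue.Empty:
                break

    def run_offline(self, recording: List[Event]) -> None:
        """Replay previously recorded (annotated) data through the pipeline."""
        for event in sorted(recording, key=lambda e: e.timestamp):
            self.dispatch(event)


if __name__ == "__main__":
    fw = Framework()
    fw.add_output(OutputModule(lambda e: print(f"robot reacts to {e.channel}: {e.payload}")))
    InputModule("speech", fw.bus).emit("hello")
    InputModule("gesture", fw.bus).emit("wave")
    fw.run_online()
```

The sketch mirrors two properties the abstract emphasizes: input modules for channels such as speech or gestures can be combined freely with different output options (robot reactions), and the identical dispatch path handles online processing of live input as well as offline replay of recorded, annotated data.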
