The roles and recognition of Haptic-Ostensive actions in collaborative multimodal human-human dialogues

Abstract

The RoboHelper project has the goal of developing assistive robots for the elderly. One crucial component of such a robot is a multimodal dialogue architecture, since collaborative task-oriented human-human dialogue is inherently multimodal. In this paper, we focus on a specific type of interaction, Haptic-Ostensive (H-O) actions, which are pervasive in collaborative dialogue. H-O actions manipulate objects, but they also often perform a referring function. We collected 20 collaborative task-oriented human-human dialogues between a helper and an elderly person in a realistic setting. To collect the haptic signals, we developed an unobtrusive sensory glove with pressure sensors. Multiple annotations were then conducted to build the Find corpus. Supervised machine learning was applied to these annotations in order to develop reference resolution and dialogue act classification modules. Both the corpus analysis and these two modules show that H-O actions play a crucial role in interaction: models that include H-O actions and other extra-linguistic information, such as pointing gestures, perform better. For true human-robot interaction, all communicative intentions must of course be recognized in real time, not on the basis of annotated categories. To demonstrate that our corpus analysis is not an end in itself, but can inform actual human-robot interaction, the last part of our paper presents additional experiments on recognizing H-O actions from the haptic signals measured through the sensory glove. We show that even though pressure sensors are relatively imprecise and the data provided by the glove is noisy, the classification algorithms can successfully identify actions of interest within subjects.
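The final experiments described in the abstract classify H-O actions directly from the glove's noisy pressure readings using supervised learning. The abstract does not specify the features or the classifier used, so the sketch below is purely illustrative: it assumes hypothetical action labels, synthetic pressure windows, simple summary features, and a nearest-centroid classifier, none of which are taken from the paper.

```python
import random
import statistics

# Hypothetical action labels; the paper's actual H-O taxonomy is richer.
ACTIONS = ["grasp", "point-touch", "no-action"]

def window_features(window):
    """Summarize a window of per-sensor pressure frames into
    (mean, max, stdev); real systems would use richer features."""
    flat = [v for frame in window for v in frame]
    return (statistics.mean(flat), max(flat), statistics.pstdev(flat))

def simulate_window(action, rng, n_frames=20, n_sensors=5):
    """Generate a noisy synthetic pressure window for a given action,
    standing in for the real (noisy) glove signal."""
    base = {"grasp": 0.8, "point-touch": 0.4, "no-action": 0.05}[action]
    return [[max(0.0, rng.gauss(base, 0.1)) for _ in range(n_sensors)]
            for _ in range(n_frames)]

class NearestCentroid:
    """Minimal supervised classifier: one feature centroid per label."""
    def fit(self, X, y):
        self.centroids = {}
        for label in set(y):
            feats = [x for x, lab in zip(X, y) if lab == label]
            self.centroids[label] = tuple(
                statistics.mean(col) for col in zip(*feats))
        return self

    def predict(self, x):
        return min(self.centroids, key=lambda lab: sum(
            (a - b) ** 2 for a, b in zip(x, self.centroids[lab])))

rng = random.Random(0)
train = [(window_features(simulate_window(a, rng)), a)
         for a in ACTIONS for _ in range(30)]
clf = NearestCentroid().fit([f for f, _ in train], [a for _, a in train])

test = [(window_features(simulate_window(a, rng)), a)
        for a in ACTIONS for _ in range(10)]
accuracy = sum(clf.predict(f) == a for f, a in test) / len(test)
```

Despite the injected sensor noise, the well-separated pressure profiles make the windows easy to classify, mirroring the abstract's point that even imprecise pressure data supports reliable within-subject action recognition.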

Bibliographic information

  • Source
    Computer speech and language | 2015, No. 1 | pp. 201-231 | 31 pages
  • Author affiliations

    Natural Language Processing Lab, Department of Computer Science, University of Illinois at Chicago, Chicago, IL, United States;

    Robotics Lab, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, IL, United States;

    Natural Language Processing Lab, Department of Computer Science, University of Illinois at Chicago, Chicago, IL, United States;

    Robotics Lab, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, IL, United States;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Original format: PDF
  • Language: English
  • Keywords

    Haptic-Ostensive actions; Multimodal dialogues; Reference resolution; Dialogue act classification;

