Towards Modeling Collaborative Task Oriented Multimodal Human-human Dialogues.



Abstract

This research took place in the larger context of building effective multimodal interfaces to help elderly people live independently. The final goal was to build a dialogue manager that could be deployed on a robot. The robot would help elderly people perform Activities of Daily Living (ADLs), such as cooking dinner and setting a table. In particular, I focused on building dialogue processing modules to understand such multimodal dialogues. Specifically, I investigated the functions of gestures (e.g. Pointing Gestures and Haptic-Ostensive actions, which involve force exchange) in dialogues concerning collaborative tasks in ADLs.

This research employed an empirical approach. The machine-learning-based modules were built using collected human experiment data. The ELDERLY-AT-HOME corpus was built from a data collection of human-human collaborative interactions in the elderly care domain. Multiple categories of annotations were then added to build the Find corpus, which contains only the experiment episodes in which two subjects collaboratively searched for objects (e.g. a pot or a spoon), an essential component of performing ADLs.

This research developed three main modules: coreference resolution, Dialogue Act classification, and task state inference. The coreference resolution experiments showed that modalities other than language play an important role in bringing antecedents into the dialogue context. The Dialogue Act classification experiments showed that multimodal features, including gestures, Haptic-Ostensive actions, and subject location, significantly improve accuracy. They also showed that dialogue games help improve performance, even when the dialogue games are inferred dynamically.
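As an illustrative sketch only (not the thesis's actual model or tag set), a tiny Naive Bayes classifier over combined lexical and gesture/Haptic-Ostensive features shows how multimodal evidence can be folded into a Dialogue Act decision; the labels, feature names, and training examples below are invented for illustration.

```python
import math
from collections import Counter, defaultdict

# Hypothetical training data: (words, gesture/Haptic-Ostensive features, Dialogue Act).
# The actual tag set and feature templates in the thesis may differ.
TRAIN = [
    (["where", "is", "the", "pot"], [], "Info-Request"),
    (["it", "is", "in", "that", "cabinet"], ["pointing"], "Answer"),
    (["here", "you", "go"], ["haptic-ostensive:hand-over"], "Action-Directive"),
    (["take", "this", "spoon"], ["haptic-ostensive:hold-out"], "Action-Directive"),
    (["where", "did", "you", "put", "the", "spoon"], [], "Info-Request"),
]

def featurize(words, modal_feats):
    # Combine lexical unigrams with gesture / Haptic-Ostensive features
    # into one flat feature list.
    return [f"w={w}" for w in words] + [f"m={m}" for m in modal_feats]

class NaiveBayes:
    def __init__(self, data):
        self.label_counts = Counter()
        self.feat_counts = defaultdict(Counter)
        self.vocab = set()
        for words, mods, label in data:
            self.label_counts[label] += 1
            for f in featurize(words, mods):
                self.feat_counts[label][f] += 1
                self.vocab.add(f)

    def predict(self, words, mods):
        # Multinomial Naive Bayes with add-one smoothing.
        feats = featurize(words, mods)
        total = sum(self.label_counts.values())
        best, best_lp = None, float("-inf")
        for label, lc in self.label_counts.items():
            lp = math.log(lc / total)
            denom = sum(self.feat_counts[label].values()) + len(self.vocab)
            for f in feats:
                lp += math.log((self.feat_counts[label][f] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

clf = NaiveBayes(TRAIN)
```

On this toy data, the Haptic-Ostensive feature pulls an otherwise ambiguous utterance toward Action-Directive, which is the intuition behind adding multimodal features to the classifier.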
A heuristic rule-based task state inference system, using the results of Dialogue Act classification and coreference resolution, was designed and evaluated; the experiments showed reasonably good results.

Compared to previous work, the contributions of this research are as follows: 1) built a multimodal corpus focusing on human-human collaborative task-oriented dialogues; 2) investigated coreference resolution from language to objects in the real world; 3) experimented with Dialogue Act classification using utterances, gestures, and Haptic-Ostensive actions; 4) implemented and evaluated a task state inference system.
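The rule-based task state inference described above can be sketched minimally as a state machine that consumes (Dialogue Act, resolved referent) pairs; the state names and rules below are illustrative assumptions, not the thesis's actual rule set.

```python
# Hypothetical sketch of rule-based task-state tracking for a collaborative
# "find an object" episode. Each turn arrives as a pair of the classified
# Dialogue Act and the object resolved by coreference (or None).

def update_state(state, dialogue_act, referent):
    """Advance the task state given one classified, resolved utterance."""
    if dialogue_act == "Info-Request" and referent:
        # A request for an object starts a new search.
        return {"target": referent, "phase": "searching"}
    if dialogue_act == "Answer" and state.get("phase") == "searching":
        # An answer during a search marks the object as located.
        return {**state, "phase": "located"}
    if dialogue_act == "Action-Directive" and state.get("phase") == "located":
        # A directive after locating the object completes the hand-over.
        return {**state, "phase": "delivered"}
    return state  # rules that don't fire leave the state unchanged

def infer_task_state(turns):
    state = {"target": None, "phase": "idle"}
    for act, ref in turns:
        state = update_state(state, act, ref)
    return state
```

For example, the turn sequence Info-Request("pot") → Answer → Action-Directive walks the state from idle through searching and located to delivered.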

Bibliographic Information

  • Author: Chen, Lin.
  • Affiliation: University of Illinois at Chicago.
  • Degree-granting institution: University of Illinois at Chicago.
  • Subjects: Computer science; Artificial intelligence.
  • Degree: Ph.D.
  • Year: 2014
  • Pages: 134 p.
  • Format: PDF
  • Language: English
  • Chinese Library Classification: Remote sensing technology
