International Conference on Applications of Natural Language to Information Systems

A Hierarchical Iterative Attention Model for Machine Comprehension


Abstract

Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved, goal of Natural Language Processing, which makes reading comprehension of text an important problem in NLP research. In this paper, we propose a novel Hierarchical Iterative Attention model (HIA), which constructs an iterative alternating attention mechanism over tree-structured rather than sequential representations. The proposed HIA model continually refines its view of the query and the document while aggregating the information required to answer a query, computing attention not only over the document but also over the query, so that both sides benefit from their mutual information. Experimental results show that HIA achieves state-of-the-art performance on public English datasets such as the CNN and Children's Book Test datasets. Furthermore, HIA also outperforms state-of-the-art systems by a large margin on the recently released People Daily and Children's Fairy Tale datasets, which are the first Chinese reading comprehension datasets.
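To make the alternating-refinement idea concrete, below is a minimal NumPy sketch of iterative alternating attention between a query and a document. It is not the authors' HIA implementation: HIA operates over hierarchical, tree-structured representations with learned parameters, whereas this toy example uses flat random token vectors and plain dot-product attention, purely to illustrate the loop in which each side's view is refined by attending with the other side's current summary.

```python
# Toy sketch of iterative alternating attention (NOT the HIA model itself):
# both sides are random flat vectors and the attention is unparameterized
# dot-product attention, used only to illustrate the alternating refinement.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def glimpse(memory, probe):
    """Attend over `memory` (n x d) with a probe vector (d,) and return the
    attention-weighted summary of the memory."""
    scores = memory @ probe          # (n,) similarity of each token to the probe
    weights = softmax(scores)        # attention distribution over tokens
    return weights @ memory          # (d,) weighted summary

d = 8
query_tokens = rng.normal(size=(5, d))    # toy query token encodings
doc_tokens = rng.normal(size=(40, d))     # toy document token encodings

# Initial views of the query and document (here: simple mean pooling).
query_view = query_tokens.mean(axis=0)
doc_view = doc_tokens.mean(axis=0)

# Alternating attention: each iteration re-attends to the query using the
# current document view, then re-attends to the document using the refined
# query view, so the two sides inform each other.
for step in range(3):
    query_view = glimpse(query_tokens, doc_view)   # query-side attention
    doc_view = glimpse(doc_tokens, query_view)     # document-side attention

# In an actual reader, the final document-side weights or summary would be
# used to score candidate answers; here we just print the summary vector.
print("final document summary:", np.round(doc_view, 3))
```

In the full model, the mean-pooling initialization and dot-product glimpses would be replaced by learned, tree-structured encoders and parameterized attention, but the control flow of refining both sides over several iterations is the part the abstract describes.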
