Second Workshop on Neural Machine Translation 2018

A Shared Attention Mechanism for Interpretation of Neural Automatic Post-Editing Systems


Abstract

Automatic post-editing (APE) systems aim to correct the systematic errors made by machine translation systems. In this paper, we propose a neural APE system that encodes the source (src) and machine-translated (mt) sentences with two separate encoders, but leverages a shared attention mechanism to better understand how the two inputs contribute to the generation of the post-edited (pe) sentences. Our empirical observations show that when the mt is incorrect, the attention shifts weight toward tokens in the src sentence to properly edit the incorrect translation. The model has been trained and evaluated on the official data from the WMT16 and WMT17 APE IT-domain English-German shared tasks. Additionally, we have used the extra 500K artificial data provided by the shared task. Our system has been able to reproduce the accuracies of systems trained with the same data, while at the same time providing better interpretability.
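The abstract describes a single attention distribution computed jointly over the states of both encoders, so that probability mass can shift between src and mt tokens. The paper's exact scoring function is not given here; the following is a minimal illustrative sketch (dot-product scoring and all names are assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def shared_attention(dec_state, src_states, mt_states):
    """One softmax over the concatenated src and mt encoder states.

    Because both inputs compete in a single distribution, the decoder
    can shift attention mass toward src tokens when mt is wrong.
    Returns the context vector plus the total mass on each input.
    """
    states = np.concatenate([src_states, mt_states], axis=0)  # (n_src + n_mt, d)
    scores = states @ dec_state      # dot-product scoring (an assumption)
    weights = softmax(scores)        # single joint distribution
    context = weights @ states       # weighted sum of all encoder states
    n_src = src_states.shape[0]
    return context, weights[:n_src].sum(), weights[n_src:].sum()

# Toy usage: 3 src tokens, 2 mt tokens, hidden size 4.
rng = np.random.default_rng(0)
ctx, src_mass, mt_mass = shared_attention(
    rng.normal(size=4), rng.normal(size=(3, 4)), rng.normal(size=(2, 4))
)
```

Inspecting `src_mass` versus `mt_mass` at each decoding step is what makes this formulation interpretable: the split directly shows which input the model is copying from.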

