NAFOSTED Conference on Information and Computer Science

Comparing Machine Translation Accuracy of Attention Models



Abstract

Machine translation models built on a plain encoder-decoder architecture do not reach the accuracy one might expect. One reason for this shortfall is the lack of an attention mechanism during training. Attention-based models overcome the drawbacks of their predecessors and achieve notable improvements in accuracy. In this paper, we experiment with three attention models and evaluate their BLEU scores on small data sets. The Bahdanau model achieves high accuracy, the Transformer model obtains good accuracy, while the Luong model reaches only acceptable accuracy.
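The two recurrent attention variants compared in the abstract differ mainly in how they score a decoder state against each encoder state: Luong attention uses a multiplicative (dot-product) score, while Bahdanau attention uses an additive score passed through a tanh. The following is a minimal numpy sketch of both scoring functions with toy dimensions; the weight names (W1, W2, v) and all sizes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Toy dimensions (illustrative assumptions, not from the paper)
d = 4  # hidden size
T = 3  # source sentence length
rng = np.random.default_rng(0)
enc = rng.standard_normal((T, d))  # encoder hidden states h_1..h_T
dec = rng.standard_normal(d)       # current decoder state s_t

# Luong (multiplicative) score: score(s_t, h_i) = s_t . h_i
luong_scores = enc @ dec

# Bahdanau (additive) score: score(s_t, h_i) = v . tanh(W1 s_t + W2 h_i)
W1 = rng.standard_normal((d, d))
W2 = rng.standard_normal((d, d))
v = rng.standard_normal(d)
bahdanau_scores = np.tanh(enc @ W2.T + dec @ W1.T) @ v

# Both variants then normalize scores into attention weights
# and form a context vector as a weighted sum of encoder states.
for scores in (luong_scores, bahdanau_scores):
    alpha = softmax(scores)   # attention weights, sum to 1
    context = alpha @ enc     # context vector fed to the decoder
```

The Transformer generalizes the multiplicative form into scaled dot-product attention applied in parallel across all positions, dispensing with recurrence entirely.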
