Crowdsourcing for Evaluating Machine Translation Quality

9th International Conference on Language Resources and Evaluation (LREC)

Abstract

The recent popularity of machine translation has increased the demand for the evaluation of translations. However, the traditional evaluation approach, manual checking by bilingual professionals, is expensive and slow. In this study, we confirm the feasibility of crowdsourcing by analyzing the accuracy of crowdsourced translation evaluations. We compare crowdsourced scores to professional scores with regard to three metrics: translation-score, sentence-score, and system-score. A Chinese-to-English translation evaluation task was designed around the NTCIR-9 PATENT parallel corpus, with workers rating adequacy and fluency on 5-point scales. The experiment shows that the average score of crowd workers matches professional evaluation results well. The system-score comparison strongly indicates that crowdsourcing can be used to find the best translation system given an input of only 10 source sentences.
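As a rough illustration of the aggregation the abstract describes (not code from the paper), the sketch below averages the 1-5 ratings that crowd workers assign to each translation and computes the Pearson correlation between those averages and professional scores; the function names and data are hypothetical.

```python
# A minimal sketch, assuming each crowd judgment is a 1-5 adequacy or
# fluency rating for one translation. All data below is illustrative.
from collections import defaultdict
from statistics import mean

def average_crowd_scores(judgments):
    """Average the 1-5 ratings that crowd workers gave each translation.

    judgments: iterable of (translation_id, score) pairs.
    Returns {translation_id: mean score}.
    """
    by_translation = defaultdict(list)
    for translation_id, score in judgments:
        by_translation[translation_id].append(score)
    return {tid: mean(scores) for tid, scores in by_translation.items()}

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative data: three crowd judgments per translation, plus one
# professional score per translation to compare against.
crowd = [("t1", 4), ("t1", 5), ("t1", 4),
         ("t2", 2), ("t2", 3), ("t2", 2),
         ("t3", 5), ("t3", 4), ("t3", 5)]
professional = {"t1": 4, "t2": 2, "t3": 5}

avg = average_crowd_scores(crowd)
ids = sorted(professional)
r = pearson([avg[i] for i in ids], [professional[i] for i in ids])
print(f"crowd averages: {avg}")
print(f"correlation with professional scores: {r:.3f}")
```

The same averaging, applied per translation system over a small set of source sentences, would yield the system-score ranking the abstract refers to.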
