Venue: Annual Meeting of the Association for Computational Linguistics

Improving the Robustness of Question Answering Systems to Question Paraphrasing



Abstract

Despite the advancement of question answering (QA) systems and rapid improvements on held-out test sets, their generalizability remains a concern. We explore the robustness of QA models to question paraphrasing by creating two test sets of paraphrased SQuAD questions. Paraphrased questions in the first test set stay very close to the originals and are designed to probe QA models' over-sensitivity, while questions in the second test set are paraphrased using context words near an incorrect answer candidate, in an attempt to confuse QA models. We show that both paraphrased test sets lead to a significant drop in performance across multiple state-of-the-art QA models. Using a neural paraphrasing model, trained to generate multiple paraphrased questions from a given source question and a set of paraphrase suggestions, we propose a data augmentation approach that requires no human intervention to re-train the models for improved robustness to question paraphrasing.
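The augmentation approach described in the abstract can be sketched as a simple pipeline: generate paraphrases of each training question, pair them with the original context and gold answer, and re-train on the enlarged set. The paper uses a neural paraphrasing model; in the minimal sketch below, a toy substitution-based paraphraser (using a hypothetical table of paraphrase suggestions) stands in for it so the pipeline is runnable end to end.

```python
# Illustrative sketch of paraphrase-based data augmentation for QA training.
# PARAPHRASE_SUGGESTIONS is a hypothetical stand-in for the paper's neural
# paraphrasing model and paraphrase-suggestion set.

PARAPHRASE_SUGGESTIONS = {
    "who wrote": ["who authored", "who is the writer of"],
    "what is": ["what's", "define"],
}


def paraphrase(question: str) -> list[str]:
    """Return paraphrase candidates by applying suggestion substitutions."""
    q = question.lower()
    variants = []
    for phrase, substitutes in PARAPHRASE_SUGGESTIONS.items():
        if phrase in q:
            for sub in substitutes:
                variants.append(q.replace(phrase, sub))
    return variants


def augment(dataset: list[dict]) -> list[dict]:
    """Augment (question, context, answer) examples with paraphrased questions.

    The original gold answer is reused for each paraphrase, which is why no
    human relabeling is needed.
    """
    augmented = list(dataset)
    for example in dataset:
        for q in paraphrase(example["question"]):
            augmented.append({**example, "question": q})
    return augmented


if __name__ == "__main__":
    data = [{"question": "Who wrote Hamlet?",
             "context": "Hamlet is a tragedy by William Shakespeare.",
             "answer": "William Shakespeare"}]
    print(len(augment(data)))  # original example plus two paraphrases -> 3
```

The augmented set would then be fed back into standard QA training; any real implementation would swap the toy substitution table for a trained paraphrase generator and filter out low-quality paraphrases.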


