Venue: Annual Meeting of the Association for Computational Linguistics

Interpretable Question Answering on Knowledge Bases and Text



Abstract

The interpretability of machine learning (ML) models is becoming more relevant as their adoption grows. In this work, we address the interpretability of ML-based question answering (QA) models over a combination of knowledge bases (KBs) and text documents. We adapt post hoc explanation methods such as LIME and input perturbation (IP) and compare them with the model's self-explanatory attention mechanism. For this purpose, we propose an automatic evaluation paradigm for explanation methods in the context of QA. We also conduct a study with human annotators to evaluate whether explanations help them identify better QA models. Our results suggest that IP provides better explanations than LIME or attention, according to both automatic and human evaluation. We obtain the same ranking of methods in both experiments, which supports the validity of our automatic evaluation paradigm.
