Conference on Empirical Methods in Natural Language Processing

Look at the First Sentence: Position Bias in Question Answering


Abstract

Many extractive question answering models are trained to predict start and end positions of answers. The choice of predicting answers as positions is mainly due to its simplicity and effectiveness. In this study, we hypothesize that when the distribution of the answer positions is highly skewed in the training set (e.g., answers lie only in the k-th sentence of each passage), QA models predicting answers as positions can learn spurious positional cues and fail to give answers in different positions. We first illustrate this position bias in popular extractive QA models such as BiDAF and BERT and thoroughly examine how position bias propagates through each layer of BERT. To safely deliver position information without position bias, we train models with various de-biasing methods including entropy regularization and bias ensembling. Among them, we found that using the prior distribution of answer positions as a bias model is very effective at reducing position bias, recovering the performance of BERT from 37.48% to 81.64% when trained on a biased SQuAD dataset.
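To make the bias-ensembling idea above concrete, here is a minimal PyTorch-style sketch of a bias-product training loss that uses the empirical prior over answer positions as the bias model. The names (bias_product_loss, position_prior, etc.) are illustrative assumptions, not taken from the paper, and only the start-position head is shown; the end position is handled symmetrically.

    import torch
    import torch.nn.functional as F

    def bias_product_loss(start_logits, answer_start, position_prior):
        # start_logits:   (batch, seq_len) raw scores from the QA model
        # answer_start:   (batch,) gold answer start indices
        # position_prior: (seq_len,) empirical distribution of answer start
        #                 positions in the training set (the bias model)
        log_p_model = F.log_softmax(start_logits, dim=-1)
        # Fixed bias term: log of the position prior, with no gradient,
        # so the main model is not rewarded for learning positional cues.
        log_p_bias = torch.log(position_prior + 1e-12).detach()
        combined = F.log_softmax(log_p_model + log_p_bias, dim=-1)
        return F.nll_loss(combined, answer_start)

At inference time only the main model's own distribution (log_p_model) is used, so whatever positional shortcut the bias term absorbed during training is simply discarded.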

