Annual Meeting of the Association for Computational Linguistics

A Surprisingly Robust Trick for the Winograd Schema Challenge

Abstract

The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning. In this paper, we show that the performance of three language models on WSC273 consistently and robustly improves when fine-tuned on a similar pronoun disambiguation problem dataset (denoted WSCR). We additionally generate a large unsupervised WSC-like dataset. By fine-tuning the BERT language model both on the introduced and on the WSCR dataset, we achieve overall accuracies of 72.5% and 74.7% on WSC273 and WNLI, improving the previous state-of-the-art solutions by 8.8% and 9.6%, respectively. Furthermore, our fine-tuned models are also consistently more accurate on the "complex" subsets of WSC273, introduced by Trichelair et al. (2018).
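The abstract reports results but does not spell out the candidate-scoring step. Below is a minimal sketch of the masked-LM scoring scheme this line of work builds on: each candidate referent replaces the pronoun slot, and BERT's log-probabilities for the candidate's word pieces are summed. It assumes the Hugging Face transformers library and bert-large-uncased; the helper name, the example schema, and the single-pass masking approximation are illustrative, not the authors' exact code.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForMaskedLM.from_pretrained("bert-large-uncased")
model.eval()

def candidate_log_prob(schema: str, candidate: str) -> float:
    """Score one candidate referent for the blank ('_') in a Winograd schema.

    The pronoun slot is replaced by one [MASK] per word piece of the
    candidate; the log-probabilities BERT assigns to those pieces at the
    masked positions are summed (all pieces masked in a single pass).
    """
    piece_ids = tokenizer.encode(candidate, add_special_tokens=False)
    masked = schema.replace("_", " ".join([tokenizer.mask_token] * len(piece_ids)))
    inputs = tokenizer(masked, return_tensors="pt")
    mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    log_probs = torch.log_softmax(logits, dim=-1)
    return sum(log_probs[pos, pid].item() for pos, pid in zip(mask_positions, piece_ids))

schema = "The trophy doesn't fit in the suitcase because _ is too big."
scores = {c: candidate_log_prob(schema, c) for c in ("the trophy", "the suitcase")}
print(max(scores, key=scores.get), scores)  # expected winner: "the trophy"
```

Fine-tuning on WSCR, as the abstract describes, would then train this same scoring comparison on labeled pronoun disambiguation pairs rather than using the pretrained model zero-shot.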
