Annual Meeting of the Society for Computation in Linguistics

Jabberwocky Parsing: Dependency Parsing with Lexical Noise

Abstract

Parsing models have long benefited from the use of lexical information, and indeed current state-of-the-art neural network models for dependency parsing achieve substantial improvements by exploiting distributed representations of lexical information. At the same time, humans can easily parse sentences with unknown or even novel words, as in Lewis Carroll's poem Jabberwocky. In this paper, we carry out jabberwocky parsing experiments, exploring how robust a state-of-the-art neural network parser is to the absence of lexical information. We find that current parsing models, at least under usual training regimens, are in fact overly dependent on lexical information, and perform badly in the jabberwocky context. We also demonstrate that the technique of word dropout drastically improves parsing robustness in this setting, and leads to significant improvements in out-of-domain parsing as well.
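The word dropout technique mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: it uses the common frequency-dependent formulation in which each training token is replaced by an unknown symbol with probability α/(α + freq(w)), so that rare words are dropped more often and the parser is forced to rely on non-lexical cues such as POS tags and context. The function name, the α value, and the `<UNK>` symbol are illustrative choices.

```python
import random
from collections import Counter

def word_dropout(sentence, freqs, alpha=0.25, unk="<UNK>", rng=random):
    """Stochastically replace training tokens with an unknown symbol.

    Each word w is replaced with probability alpha / (alpha + freqs[w]),
    so rare words are dropped more aggressively than frequent ones.
    """
    return [unk if rng.random() < alpha / (alpha + freqs[w]) else w
            for w in sentence]

# Word frequencies would normally be counted over the training corpus.
tokens = "the cat sat on the mat".split()
freqs = Counter(tokens)
print(word_dropout(tokens, freqs))
```

Applied at training time only, this exposes the parser to `<UNK>` tokens in ordinary contexts, which is what makes it plausible as a remedy for the jabberwocky setting the paper studies.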
