Second annual meeting of the Society for Computation in Linguistics

Jabberwocky Parsing: Dependency Parsing with Lexical Noise

Abstract

Parsing models have long benefited from the use of lexical information, and indeed current state-of-the-art neural network models for dependency parsing achieve substantial improvements from distributed representations of lexical information. At the same time, humans can easily parse sentences with unknown or even novel words, as in Lewis Carroll's poem Jabberwocky. In this paper, we carry out jabberwocky parsing experiments, exploring how robust a state-of-the-art neural network parser is to the absence of lexical information. We find that current parsing models, at least under usual training regimens, are in fact overly dependent on lexical information, and perform badly in the jabberwocky context. We also demonstrate that the technique of word dropout drastically improves parsing robustness in this setting, and also leads to significant improvements in out-of-domain parsing.
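Although the abstract does not spell out the recipe, word dropout in neural parsing is commonly implemented by stochastically replacing training tokens with an unknown-word symbol, with rarer words replaced more often. The sketch below is a minimal illustration under that assumption; the names word_dropout and UNK and the alpha/(alpha + freq) schedule are illustrative choices, not details taken from the paper.

```python
import random

# Illustrative sketch of word dropout for a dependency parser's training
# loop. This is an assumption about one common recipe, not the authors'
# exact setup: each token is replaced by an unknown-word symbol with a
# probability that decays with its training-set frequency, so the model
# cannot rely too heavily on lexical identity.

UNK = "<unk>"  # hypothetical unknown-word symbol

def word_dropout(tokens, freq, alpha=0.25, rng=random):
    """Replace tokens with UNK; rarer words are dropped more often."""
    out = []
    for tok in tokens:
        p_drop = alpha / (alpha + freq.get(tok, 0))  # unseen words are always dropped
        out.append(UNK if rng.random() < p_drop else tok)
    return out

# Frequent words are usually kept, while rare words often become <unk>,
# approximating the "jabberwocky" condition during training.
freq = {"all": 2500, "mimsy": 1, "were": 3000, "the": 5000, "borogoves": 1}
print(word_dropout(["all", "mimsy", "were", "the", "borogoves"], freq))
```

At test time no replacement is applied; having seen many unknown-word tokens during training, a parser trained this way can degrade more gracefully when lexical information is missing, which is the robustness effect the abstract describes.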
