Venue: Workshop on the Relevance of Linguistic Structure in Neural Architectures for NLP, 2018

Syntax Helps ELMo Understand Semantics: Is Syntax Still Relevant in a Deep Neural Architecture for SRL?


Abstract

Do unsupervised methods for learning rich, contextualized token representations obviate the need for explicit modeling of linguistic structure in neural network models for semantic role labeling (SRL)? We address this question by incorporating the massively successful ELMo embeddings (Peters et al., 2018) into LISA (Strubell and McCallum, 2018), a strong, linguistically-informed neural network architecture for SRL. In experiments on the CoNLL-2005 shared task we find that though ELMo outperforms typical word embeddings, beginning to close the gap in F1 between LISA with predicted and gold syntactic parses, syntactically-informed models still outperform syntax-free models when both use ELMo, especially on out-of-domain data. Our results suggest that linguistic structures are indeed still relevant in this golden age of deep learning for NLP.
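As a minimal sketch of what "incorporating ELMo embeddings" can look like in practice, the snippet below uses the allennlp Elmo module (the library released alongside Peters et al., 2018) to produce contextualized token vectors that an SRL encoder could consume in place of, or concatenated with, static word embeddings. The option/weight file paths are placeholders and the downstream wiring is hypothetical; this is not the paper's implementation.

```python
# A minimal sketch (not the paper's code): computing ELMo contextualized
# token representations with allennlp, as an SRL model like LISA might
# consume them at its input layer.
import torch
from allennlp.modules.elmo import Elmo, batch_to_ids

# Placeholder paths: substitute the published ELMo option/weight files.
options_file = "elmo_options.json"
weight_file = "elmo_weights.hdf5"

# One output representation = a single learned scalar mix over ELMo's layers.
elmo = Elmo(options_file, weight_file, num_output_representations=1, dropout=0.0)

sentences = [["Syntax", "helps", "ELMo", "understand", "semantics", "."]]
character_ids = batch_to_ids(sentences)          # (batch, tokens, 50) char ids

output = elmo(character_ids)
token_reprs = output["elmo_representations"][0]  # (batch, tokens, dim), e.g. dim=1024
mask = output["mask"]                            # (batch, tokens) padding mask

# Hypothetical downstream step: these vectors would replace or be concatenated
# with static word embeddings before the SRL encoder (e.g. LISA's transformer).
print(token_reprs.shape)
```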
