BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP

Latent Tree Learning with Ordered Neurons: What Parses Does It Produce?



Abstract

Recent latent tree learning models can learn constituency parsing without any exposure to human-annotated tree structures. One such model is ON-LSTM (Shen et al., 2019), which is trained on language modelling and has near-state-of-the-art performance on unsupervised parsing. In order to better understand the performance and consistency of the model as well as how the parses it generates are different from gold-standard PTB parses, we replicate the model with different restarts and examine their parses. We find that (1) the model has reasonably consistent parsing behaviors across different restarts, (2) the model struggles with the internal structures of complex noun phrases, (3) the model has a tendency to overestimate the height of the split points right before verbs. We speculate that both problems could potentially be solved by adopting a different training task other than unidirectional language modelling.
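For context on the "split points" mentioned above: ON-LSTM induces an unsupervised parse by recursively splitting a sentence at the gap with the largest estimated syntactic distance. The Python sketch below, with invented distance values, illustrates that greedy top-down procedure; it is an illustration only, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): how ON-LSTM-style unsupervised
# parsing turns per-gap "syntactic distances" into a binary tree, by
# recursively splitting each span at the gap with the largest distance.
# The sentence and distance values below are invented for illustration.

def build_tree(words, distances):
    """Greedy top-down splitting.

    distances[i] scores the gap between words[i] and words[i + 1];
    a larger value means a higher split point in the induced tree.
    """
    if len(words) == 1:
        return words[0]
    split = max(range(len(distances)), key=lambda i: distances[i])
    left = build_tree(words[:split + 1], distances[:split])
    right = build_tree(words[split + 1:], distances[split + 1:])
    return (left, right)


sentence = ["the", "cat", "sat", "on", "the", "mat"]
gaps = [0.2, 0.9, 0.7, 0.3, 0.1]  # hypothetical syntactic distances
print(build_tree(sentence, gaps))
# (('the', 'cat'), ('sat', ('on', ('the', 'mat'))))
```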
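Comparisons with gold-standard PTB parses, and consistency checks across restarts, are typically reported as unlabeled bracket F1. The sketch below computes that metric for two invented binary trees; exact evaluation conventions (e.g. whether trivial spans and punctuation are counted) vary, and this version simply keeps every binary-node span.

```python
# Minimal sketch (illustration only): unlabeled bracket F1, the usual metric
# for comparing induced parses with gold PTB trees, or with each other
# across restarts.

def spans(tree, start=0):
    """Return the set of (start, end) spans of a binary tree, plus its end index."""
    if isinstance(tree, str):  # leaf: a single word
        return set(), start + 1
    left_spans, mid = spans(tree[0], start)
    right_spans, end = spans(tree[1], mid)
    return left_spans | right_spans | {(start, end)}, end


def bracket_f1(pred, gold):
    p, _ = spans(pred)
    g, _ = spans(gold)
    overlap = len(p & g)
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)


pred = (("the", "cat"), ("sat", ("on", ("the", "mat"))))
gold = (("the", "cat"), (("sat", "on"), ("the", "mat")))
print(bracket_f1(pred, gold))  # 0.8
```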

