
Neural Architectures for Named Entity Recognition



Abstract

State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available. In this paper, we introduce two new neural architectures: one based on bidirectional LSTMs and conditional random fields (CRFs), and another that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers. Our models rely on two sources of information about words: character-based word representations learned from the supervised corpus, and unsupervised word representations learned from unannotated corpora. Our models obtain state-of-the-art NER performance in four languages without resorting to any language-specific knowledge or resources such as gazetteers.
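In the BiLSTM-CRF architecture the abstract describes, the BiLSTM produces per-token emission scores and the CRF layer adds learned tag-transition scores, so the best tag sequence is found jointly by Viterbi decoding rather than token by token. The following stdlib-only sketch illustrates just that decoding step; the three-tag set {B, I, O}, the toy scores, and the `viterbi` helper are hypothetical illustrations, not the paper's implementation.

```python
# Minimal sketch of CRF Viterbi decoding over BiLSTM emission scores.
# Tags, scores, and function names here are illustrative toy values,
# not the paper's actual model.

def viterbi(emissions, transitions, tags):
    """Return the highest-scoring tag sequence for one sentence.

    emissions:   list of {tag: score} dicts, one per token
                 (in the real model these come from the BiLSTM).
    transitions: {(prev_tag, tag): score} learned by the CRF layer.
    """
    # Best path score ending in each tag at the first token.
    best = {t: emissions[0][t] for t in tags}
    backptrs = []
    for emit in emissions[1:]:
        new_best, ptr = {}, {}
        for t in tags:
            # Pick the previous tag that maximizes the path score into t.
            prev = max(tags, key=lambda p: best[p] + transitions[(p, t)])
            new_best[t] = best[prev] + transitions[(prev, t)] + emit[t]
            ptr[t] = prev
        best = new_best
        backptrs.append(ptr)
    # Backtrack from the best final tag.
    last = max(tags, key=lambda t: best[t])
    path = [last]
    for ptr in reversed(backptrs):
        path.append(ptr[path[-1]])
    return list(reversed(path))

tags = ["B", "I", "O"]
# Toy transitions: strongly discourage O -> I, since an entity
# cannot begin with an inside tag; mildly encourage B -> I.
transitions = {(p, t): 0.0 for p in tags for t in tags}
transitions[("O", "I")] = -10.0
transitions[("B", "I")] = 1.0
# Toy emission scores for a three-token sentence.
emissions = [
    {"B": 2.0, "I": 0.0, "O": 0.5},
    {"B": 0.0, "I": 1.5, "O": 1.0},
    {"B": 0.0, "I": 0.0, "O": 2.0},
]
print(viterbi(emissions, transitions, tags))  # → ['B', 'I', 'O']
```

The transition scores are what let the CRF enforce sequence-level constraints (such as ruling out an `I` tag directly after `O`) that a token-independent classifier cannot.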

