Procedia Computer Science

Combining Multi-task Learning with Transfer Learning for Biomedical Named Entity Recognition



Abstract

Multi-task learning approaches have shown significant improvements in different fields by training related tasks simultaneously. The multi-task model learns features common to the different tasks through shared layers. However, in some natural language processing tasks, particularly sequence labelling problems, multi-task learning can suffer performance degradation relative to single-task learning. To tackle this limitation, we formulate a simple but effective approach that combines multi-task learning with transfer learning. We use a simple model consisting of a bidirectional long short-term memory (BiLSTM) network and a conditional random field (CRF). With this simple model, we achieve a better F1-score than our single-task and multi-task models, as well as state-of-the-art multi-task models.
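The abstract describes a multi-task setup in which several tagging tasks share encoder layers. The sketch below, written in PyTorch, is a minimal illustration of that idea and is not the authors' code: the layer sizes, task names, and the use of plain linear emission heads are assumptions (the paper places a CRF on top of the shared BiLSTM, which is omitted here for brevity).

```python
# Minimal sketch of a multi-task BiLSTM tagger with a shared encoder and
# task-specific output heads. All names and dimensions are illustrative.
import torch
import torch.nn as nn

class MultiTaskBiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, tagset_sizes):
        super().__init__()
        # Shared layers: word embeddings and a bidirectional LSTM encoder.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # One emission head per task (e.g. one per biomedical NER corpus).
        # The paper would feed these scores into a CRF layer instead.
        self.heads = nn.ModuleDict({
            task: nn.Linear(2 * hidden_dim, n_tags)
            for task, n_tags in tagset_sizes.items()
        })

    def forward(self, token_ids, task):
        shared, _ = self.bilstm(self.embedding(token_ids))
        return self.heads[task](shared)  # per-token tag scores for `task`

# Usage: alternate batches from the different tasks so the shared encoder
# learns common features; for transfer learning, the shared weights could
# then be copied into a single-task model and fine-tuned on the target corpus.
model = MultiTaskBiLSTMTagger(vocab_size=10000, embed_dim=100, hidden_dim=128,
                              tagset_sizes={"task_a": 3, "task_b": 5})
scores = model(torch.randint(0, 10000, (2, 20)), task="task_a")
print(scores.shape)  # torch.Size([2, 20, 3])
```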

