Annual Meeting of the Association for Computational Linguistics

Exploring Pre-trained Language Models for Event Extraction and Generation



Abstract

Traditional approaches to the task of ACE event extraction usually depend on manually annotated data, which is laborious to create and limited in size. Therefore, in addition to the difficulty of event extraction itself, insufficient training data hinders the learning process. To promote event extraction, we first propose an event extraction model that overcomes the roles-overlap problem by separating argument prediction by role. Moreover, to address the problem of insufficient training data, we propose a method that automatically generates labeled data by editing prototypes and screens out generated samples by ranking their quality. Experiments on the ACE2005 dataset demonstrate that our extraction model surpasses most existing extraction methods. Furthermore, incorporating our generation method yields a further significant improvement. It obtains new state-of-the-art results on the event extraction task, pushing the F1 score of trigger classification to 81.1% and the F1 score of argument classification to 58.9%.
