
Expectation-Regulated Neural Model for Event Mention Extraction

Abstract

We tackle the task of extracting tweets that mention a specific event from all tweets that contain relevant keywords, for which the main challenges include imbalanced positive and negative cases and the unavailability of manually labeled training data. Existing methods leverage a few manually given seed events and a large set of unlabeled tweets to train a classifier, using expectation regularization training with discrete ngram features. We propose an LSTM-based neural model that learns tweet-level features automatically. Compared with discrete ngram features, the neural model can potentially capture non-local dependencies and deep semantic information, which are more effective for disambiguating subtle semantic differences between true event mentions and false cases that use similar wording patterns. Results on both tweets and forum posts show that our neural model is more effective than a state-of-the-art discrete baseline.
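To make the training setup described in the abstract concrete, below is a minimal sketch (not the authors' released code) of how an LSTM tweet encoder can be combined with an expectation-regularization term on unlabeled data, assuming PyTorch and a binary "event mention vs. not" classification. The class and parameter names (`TweetEncoder`, `prior_pos`, `lam`) and all hyperparameter values are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: seed-supervised loss + expectation regularization on
# unlabeled tweets, with an LSTM learning tweet-level features.
# Assumes PyTorch; names and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class TweetEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.clf = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) padded word indices
        h, _ = self.lstm(self.embed(token_ids))
        # mean-pool over time to obtain a tweet-level representation
        pooled = h.mean(dim=1)
        return torch.sigmoid(self.clf(pooled)).squeeze(-1)  # P(event mention)

def expectation_regularization(pred_probs, prior_pos=0.1, eps=1e-8):
    """KL(prior || batch-average prediction): pushes the model's average
    positive rate on unlabeled tweets toward the expected proportion
    of true event mentions (prior_pos is an assumed value here)."""
    avg_pos = pred_probs.mean()
    prior = torch.tensor(prior_pos)
    return (prior * torch.log(prior / (avg_pos + eps))
            + (1 - prior) * torch.log((1 - prior) / (1 - avg_pos + eps)))

def training_step(model, seed_ids, seed_labels, unlabeled_ids, lam=10.0):
    # Supervised cross-entropy on the few seed examples, plus the
    # expectation-regularization term on a large unlabeled batch.
    sup = nn.functional.binary_cross_entropy(model(seed_ids),
                                             seed_labels.float())
    reg = expectation_regularization(model(unlabeled_ids))
    return sup + lam * reg
```

The key design point carried over from the abstract is that the supervised signal comes only from a handful of seed events, while the regularizer exploits the known (expected) proportion of positives in the much larger keyword-matched but unlabeled pool, which addresses both the label scarcity and the class imbalance.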
