Hitachi at SemEval-2020 Task 10: Emphasis Distribution Fusion on Fine-Tuned Language Models

Abstract

This paper describes our system for SemEval-2020 Task 10, Emphasis Selection for Written Text in Visual Media. Our strategy is two-fold. First, we fine-tune many pre-trained language models to predict an emphasis probability distribution over tokens. Then, we stack a trainable distribution fusion (DISTFUSE) system on top to fuse the predictions of the fine-tuned models. Experimental results show that DISTFUSE is comparable to or better than a naive average ensemble. As a result, we ranked 2nd among 31 teams.
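
To illustrate the general idea of a trainable distribution fusion layer, the sketch below combines per-token emphasis probabilities from several fine-tuned models through learnable, softmax-normalized weights. This is a minimal PyTorch sketch, not the authors' DISTFUSE implementation; the class name DistFusion, the tensor shapes, and the example probabilities are assumptions for illustration.

import torch
import torch.nn as nn

class DistFusion(nn.Module):
    # Learns one weight per base model; the fused prediction is the
    # softmax-weighted sum of the models' per-token emphasis probabilities.
    def __init__(self, num_models: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_models))

    def forward(self, model_probs: torch.Tensor) -> torch.Tensor:
        # model_probs: (num_models, seq_len) emphasis probabilities per token
        weights = torch.softmax(self.logits, dim=0)              # (num_models,)
        return (weights.unsqueeze(1) * model_probs).sum(dim=0)   # (seq_len,)

# Example: fuse predictions from three hypothetical fine-tuned models
# for a four-token sentence.
probs = torch.tensor([[0.9, 0.1, 0.3, 0.7],
                      [0.8, 0.2, 0.4, 0.6],
                      [0.7, 0.3, 0.2, 0.8]])
fusion = DistFusion(num_models=3)
fused = fusion(probs)  # per-token fused emphasis scores
print(fused)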
