Annual Meeting of the Association for Computational Linguistics

Towards Multimodal Sarcasm Detection (An Obviously Perfect Paper)



Abstract

Sarcasm is often expressed through several verbal and non-verbal cues, e.g., a change of tone, overemphasis in a word, a drawn-out syllable, or a straight looking face. Most of the recent work in sarcasm detection has been carried out on textual data. In this paper, we argue that incorporating multimodal cues can improve the automatic classification of sarcasm. As a first step towards enabling the development of multimodal approaches for sarcasm detection, we propose a new sarcasm dataset, the Multimodal Sarcasm Detection Dataset (MUStARD), compiled from popular TV shows. MUStARD consists of audiovisual utterances annotated with sarcasm labels. Each utterance is accompanied by its context of historical utterances in the dialogue, which provides additional information on the scenario where the utterance occurs. Our initial results show that the use of multimodal information can reduce the relative error rate of sarcasm detection by up to 12.9% in F-score when compared to the use of individual modalities. The full dataset is publicly available for use at https://github.com/soujanyaporia/MUStARD.
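The annotation scheme described above (utterances paired with a sarcasm label and the preceding dialogue context) lends itself to simple JSON records. The following is a minimal sketch of how such records might be consumed; the field names `utterance`, `context`, and `sarcasm`, and the toy record itself, are illustrative assumptions, not the repository's confirmed schema:

```python
import json

# Toy record in an assumed MUStARD-style format; the key, text, and
# field names are illustrative, not taken from the actual dataset.
sample = json.loads("""
{
  "ep01_007": {
    "utterance": "Oh sure, because that worked so well last time.",
    "context": ["We should try the same plan again."],
    "sarcasm": true
  }
}
""")

def label_counts(data):
    """Return (sarcastic, non-sarcastic) utterance counts."""
    sarcastic = sum(1 for rec in data.values() if rec["sarcasm"])
    return sarcastic, len(data) - sarcastic

print(label_counts(sample))  # -> (1, 0)
```

Keeping the context utterances in a list alongside each labeled utterance is what allows context-aware models to be trained and evaluated on the same records as utterance-only baselines.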


