Advances in Multimedia Information Processing - PCM 2008

Toward Multi-modal Music Emotion Classification



Abstract

Categorical music emotion classification, which divides emotion into discrete classes and relies on audio features alone, has reached a performance limit because of the semantic gap between the object feature level and the human cognitive level of emotion perception. Motivated by the fact that lyrics carry rich semantic information about a song, we propose a multi-modal approach to improve categorical music emotion classification. By exploiting both the audio features and the lyrics of a song, the proposed approach raises the 4-class emotion classification accuracy from 46.6% to 57.1%. The results also show that incorporating lyrics significantly enhances the classification accuracy of valence.
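To make the multi-modal idea concrete, here is a minimal illustrative sketch, not the paper's actual method: audio and lyric feature vectors are concatenated (early fusion) and a song is assigned to the nearest of four class centroids. The four emotion labels, the 2-D features per modality, and the centroid values are all hypothetical, chosen to mirror the common arousal/valence quadrants.

```python
# Illustrative sketch (assumed, not the paper's pipeline): early fusion of
# audio and lyric features, then nearest-centroid 4-class classification.

def fuse(audio_vec, lyric_vec):
    """Concatenate the two modality feature vectors (early fusion)."""
    return audio_vec + lyric_vec

def nearest_centroid(x, centroids):
    """Return the label of the class centroid closest to x (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Hypothetical centroids: 2-D audio features + 2-D lyric features per class,
# loosely following arousal/valence quadrants.
centroids = {
    "happy":   fuse([0.8, 0.7], [0.9, 0.6]),  # high arousal, positive valence
    "angry":   fuse([0.9, 0.2], [0.1, 0.3]),  # high arousal, negative valence
    "sad":     fuse([0.2, 0.1], [0.1, 0.2]),  # low arousal, negative valence
    "relaxed": fuse([0.3, 0.8], [0.8, 0.7]),  # low arousal, positive valence
}

song = fuse([0.75, 0.65], [0.85, 0.55])  # fused features of an unseen song
print(nearest_centroid(song, centroids))  # → happy
```

In practice the paper's gains, especially for valence, come from lyric features contributing semantic cues that audio features miss; the fusion and classifier choices above are only placeholders for that idea.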
