International Conference on Artificial Intelligence and Education

Dual Flow Framework on Bimodality Emotion Recognition Based on Facial Expression and Eye Movement


Abstract

Emotion affects many aspects of our daily life. The lack of emotional communication in online learning leads to poor learning outcomes and a poor learning experience. This paper discusses the feasibility of recognizing emotions by using eye movement and facial expression data together. Exploiting the complementarity of the two modalities, a framework is proposed for identifying a learner's emotional state through bimodal recognition of facial expression and eye movement. The learner's facial expression and eye movement information is extracted from the video stream sequence as dual-channel data, which is fed into the data flow framework, and a machine learning method is used to predict the learner's emotional state. The dual-channel data flow framework proposed in this paper can be used not only to discover learners' emotional states in learning environments, but can also be applied to the identification of mental disorders and to human-computer interaction.
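The abstract describes the framework only at a high level (two channels extracted from video, fused, then classified by a machine learning model) and does not specify the model or the fusion strategy. The following is a minimal sketch of feature-level fusion of the two channels, written in PyTorch as an illustrative assumption; the class name DualFlowEmotionNet, the feature dimensions, and the layer sizes are hypothetical and not taken from the paper.

# Minimal sketch of a dual-stream (facial expression + eye movement) emotion
# classifier. Feature-level concatenation and all dimensions are illustrative
# assumptions, not the authors' published architecture.
import torch
import torch.nn as nn

class DualFlowEmotionNet(nn.Module):
    def __init__(self, face_dim=512, eye_dim=32, num_emotions=4):
        super().__init__()
        # Stream 1: features extracted from the facial-expression channel
        self.face_branch = nn.Sequential(nn.Linear(face_dim, 128), nn.ReLU())
        # Stream 2: eye-movement features (e.g. fixation/saccade statistics)
        self.eye_branch = nn.Sequential(nn.Linear(eye_dim, 32), nn.ReLU())
        # Fuse the two channels, then classify the emotional state
        self.classifier = nn.Sequential(
            nn.Linear(128 + 32, 64), nn.ReLU(), nn.Linear(64, num_emotions)
        )

    def forward(self, face_feat, eye_feat):
        fused = torch.cat(
            [self.face_branch(face_feat), self.eye_branch(eye_feat)], dim=-1
        )
        return self.classifier(fused)

# Usage on one batch of hypothetical per-segment features
model = DualFlowEmotionNet()
face = torch.randn(8, 512)   # facial-expression features for 8 video segments
eye = torch.randn(8, 32)     # eye-movement features for the same segments
logits = model(face, eye)    # shape: (8, num_emotions)

In this sketch each channel is first projected by its own branch, so either modality can be swapped or ablated independently; the paper's actual fusion and prediction method may differ.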
