International Conference on Smart Computing and Electronic Enterprise

Deep Neural Classifiers for EEG-Based Emotion Recognition in Immersive Environments



Abstract

Emotion recognition has become a major endeavor in artificial general intelligence applications in recent years. Although significant progress has been made in emotion recognition for music, image, and video stimuli, it remains largely unexplored for immersive virtual stimuli. Our main objective in this line of investigation is to enable consistently reliable emotion recognition for virtual reality stimuli using only cheap, commercial off-the-shelf electroencephalography (EEG) headsets, commonly called "wearable EEG", which have significantly fewer recording channels and far lower signal resolution than medical-grade EEG headsets. The ultimate goal is to apply EEG-based emotion prediction, through machine learning, to procedurally generated affective content such as immersive computer games and virtual learning environments. Our prior preliminary study found that a 4-channel, 256 Hz headset was indeed able to perform the required emotion recognition tasks on VR stimuli, albeit with classification accuracies of only 65-89% using Support Vector Machine (SVM) and K-Nearest Neighbor (KNN) classifiers. In this study, we attempt to raise the classification rate above 95% by conducting a comprehensive investigation into the use of various deep neural learning architectures for this domain. By tuning the deep neural classifiers in terms of the number of hidden layers, the number of hidden nodes, and the nodal dropout ratio, the emotion prediction accuracy was improved to over 96%. This shows the continued promise of wearable EEG for emotion prediction as a cost-effective and user-friendly approach to consistent and reliable prediction deployment in virtual reality-related content and environments through deep learning approaches.
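The abstract does not specify the exact network architecture, only the tuning axes explored (number of hidden layers, number of hidden nodes, and dropout ratio). As a rough illustration of that kind of classifier, the sketch below builds a configurable fully connected network over EEG-derived features in PyTorch. The feature dimensionality, feature extraction scheme, and number of emotion classes are assumptions for illustration, not values taken from the paper.

```python
import torch
import torch.nn as nn

class EmotionMLP(nn.Module):
    """Fully connected classifier over pre-extracted EEG features.

    The constructor exposes the three hyperparameters named in the
    abstract: hidden layer count, hidden nodes per layer, dropout ratio.
    """
    def __init__(self, n_features, n_classes,
                 n_hidden_layers=3, n_hidden_nodes=128, dropout=0.2):
        super().__init__()
        layers = []
        in_dim = n_features
        for _ in range(n_hidden_layers):
            layers += [nn.Linear(in_dim, n_hidden_nodes),
                       nn.ReLU(),
                       nn.Dropout(dropout)]
            in_dim = n_hidden_nodes
        layers.append(nn.Linear(in_dim, n_classes))  # class logits
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Hypothetical setup: 5 band-power features per channel from a
# 4-channel wearable EEG headset, 4 emotion classes.
model = EmotionMLP(n_features=4 * 5, n_classes=4,
                   n_hidden_layers=3, n_hidden_nodes=128, dropout=0.2)
x = torch.randn(8, 20)   # batch of 8 feature vectors
logits = model(x)        # shape: (8, 4)
print(logits.shape)
```

In practice, a grid search over `n_hidden_layers`, `n_hidden_nodes`, and `dropout` with cross-validation would reproduce the style of tuning the abstract describes; the specific values above are placeholders.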
