International Conference on Automatic Face and Gesture Recognition

Pose-independent Facial Action Unit Intensity Regression Based on Multi-task Deep Transfer Learning



Abstract

Facial expression recognition plays an increasingly important role in human behavior analysis and human-computer interaction. Facial action units (AUs) coded by the Facial Action Coding System (FACS) provide rich cues for the interpretation of facial expressions. Much past work on AU analysis used only frontal-view images, but natural images contain a much wider variety of poses. The FG 2017 Facial Expression Recognition and Analysis challenge (FERA 2017) requires participants to estimate AU occurrence and intensity under nine different pose angles. This paper proposes a multi-task deep network addressing the AU intensity estimation sub-challenge of FERA 2017. The network performs the tasks of pose estimation and pose-dependent AU intensity estimation simultaneously, and merges the pose-dependent AU intensity estimates into a single estimate using the estimated pose. The two tasks share transferred bottom layers of a deep convolutional neural network (CNN) pre-trained on ImageNet. Our model outperforms the baseline results and achieves balanced performance across the nine pose angles for most AUs.
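The merging step described in the abstract can be sketched as a pose-weighted combination: each pose-specific head produces its own AU intensity estimates, and the pose estimator's output weights them into a single prediction. This is a minimal illustrative sketch, not the paper's implementation; the function name, the soft (probability-weighted) merge, and the use of plain Python lists are all assumptions — the actual model may instead hard-select the head of the most likely pose.

```python
def merge_au_estimates(pose_probs, au_estimates):
    """Merge pose-dependent AU intensity estimates into one estimate.

    pose_probs:   list of 9 estimated pose probabilities (one per FERA
                  2017 pose angle), summing to 1.
    au_estimates: list of 9 lists, one per pose; each inner list holds
                  that pose-specific head's AU intensity estimates.

    Returns a single list of AU intensities, each a probability-weighted
    sum over the pose-specific estimates.
    """
    n_aus = len(au_estimates[0])
    merged = [0.0] * n_aus
    for prob, estimates in zip(pose_probs, au_estimates):
        for k in range(n_aus):
            merged[k] += prob * estimates[k]
    return merged
```

When the pose estimate is one-hot, this reduces to picking the matching pose-specific head; with a softer pose distribution, nearby heads are blended, which can smooth errors near pose-angle boundaries.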
