International Conference on Automatic Face and Gesture Recognition

Generative Video Face Reenactment by AUs and Gaze Regularization



Abstract

In this work, we propose an encoder-decoder-like architecture to perform face reenactment in image sequences. Our goal is to transfer the training subject identity to a given test subject. We regularize face reenactment with facial action unit (AU) intensity and 3D gaze vector regression. In this way, we force the network to transfer subtle facial expressions and eye dynamics, producing a more lifelike result. The proposed encoder-decoder receives as input the previous frame of the sequence stacked with the facial-landmark image of the current frame. The generated frames thus benefit from both appearance and geometry while keeping temporal coherence across the generated sequence. At test time, a new target subject is reenacted with the facial performance of the source subject and the appearance of the training subject. Principal component analysis is applied to project the test subject geometry onto the closest training subject geometry before reenactment. Evaluation of our proposal shows faster convergence and more accurate, realistic results than equivalent architectures without action unit and gaze regularization.
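The frame-stacking input and the AU/gaze regularization described in the abstract can be sketched roughly as follows. This is a minimal PyTorch-style illustration, not the authors' implementation: the module names (FaceEncoderDecoder, AUGazeRegressor), the layer sizes, the number of AUs, and the loss weights lambda_au and lambda_gaze are assumptions made for clarity.

```python
# Minimal sketch of generative face reenactment regularized by AU intensity
# and 3D gaze regression. All hyperparameters and module names are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FaceEncoderDecoder(nn.Module):
    """Generator: previous RGB frame stacked with the current facial-landmark
    image (6 input channels) -> current reenacted frame."""
    def __init__(self, in_channels=6, out_channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, out_channels, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, prev_frame, landmark_image):
        # Channel-wise stacking combines appearance (previous frame) and
        # geometry (landmark image), which is what keeps temporal coherence.
        x = torch.cat([prev_frame, landmark_image], dim=1)
        return self.decoder(self.encoder(x))


class AUGazeRegressor(nn.Module):
    """Auxiliary head regressing AU intensities and a 3D gaze vector from a
    frame; used only to regularize the generator (17 AUs is an assumption)."""
    def __init__(self, num_aus=17):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.au_head = nn.Linear(32, num_aus)   # AU intensities
        self.gaze_head = nn.Linear(32, 3)       # 3D gaze vector

    def forward(self, frame):
        feat = self.backbone(frame)
        return self.au_head(feat), self.gaze_head(feat)


def training_step(gen, reg, prev_frame, landmark_image, target_frame,
                  target_aus, target_gaze, lambda_au=0.1, lambda_gaze=0.1):
    """One reconstruction step with AU and gaze regularization terms added
    to a plain L1 image reconstruction loss."""
    fake = gen(prev_frame, landmark_image)
    au_pred, gaze_pred = reg(fake)
    loss = (F.l1_loss(fake, target_frame)
            + lambda_au * F.mse_loss(au_pred, target_aus)
            + lambda_gaze * F.mse_loss(gaze_pred, target_gaze))
    return loss


if __name__ == "__main__":
    # Smoke test with random tensors standing in for real video frames.
    gen, reg = FaceEncoderDecoder(), AUGazeRegressor()
    prev = torch.rand(1, 3, 64, 64)
    lmk = torch.rand(1, 3, 64, 64)
    tgt = torch.rand(1, 3, 64, 64)
    loss = training_step(gen, reg, prev, lmk, tgt,
                         target_aus=torch.rand(1, 17),
                         target_gaze=torch.rand(1, 3))
    print(loss.item())
```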
