
Analyse de mouvements faciaux à partir d'images vidéo.

Translation: Analysis of facial movements from video images.


Abstract

In face-to-face conversation, language is supported by nonverbal communication, which plays a central role in human social behavior by adding cues to the meaning of speech, providing feedback, and managing synchronization. Information about a person's emotional state is usually conveyed by facial attributes. In fact, 55% of a message is communicated by facial expressions, whereas only 7% is carried by the words themselves and 38% by paralanguage. However, there are currently no established instruments to measure such behavior.

The computer vision community is therefore interested in developing automated techniques for prototypic facial expression analysis, with applications in human-computer interaction, meeting video analysis, security, and clinical settings.

To gather observable cues, this research aims to design a framework that builds a relatively comprehensive source of visual information, one able to distinguish facial deformations and thus to indicate the presence or absence of a particular facial action. A detailed review of existing techniques led us to explore two different approaches.

The first approach involves appearance modeling, in which gradient orientations are used to generate a dense representation of facial attributes. Beyond the facial representation problem, the main difficulty for a system intended to be general is the implementation of a generic model independent of individual identity, face geometry, and size. We therefore introduce the concept of a prototypic referential mapping through SIFT-flow registration, which this thesis shows to be superior to the conventional eye-based alignment.

In the second approach, we use a geometric model in which the facial primitives are represented by Gabor filtering. Motivated by the fact that facial expressions are not only ambiguous and inconsistent across individuals but also dependent on the behavioral context, this approach presents a personalized facial expression recognition system whose overall performance is directly related to the localization performance of a set of facial fiducial points. These points are tracked through a sequence of video frames by a modification of a fast Gabor phase-based disparity estimation technique. In this thesis, we revisit the confidence measure and introduce an iterative conditional procedure for displacement estimation that improves the robustness of the original methods.

Keywords: computer vision, image processing, facial expression recognition, emotion analysis, face analysis, feature representation, Gabor filtering, registration, tracking.
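The appearance-based approach above builds a dense representation of facial attributes from gradient orientations. Below is a minimal, HOG-style sketch of that idea in Python: it accumulates per-cell orientation histograms over a registered grayscale face crop. The function name, cell size, and bin count are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
import cv2

def dense_orientation_histogram(face_gray, cell_size=8, n_bins=9):
    """Per-cell gradient-orientation histograms over a grayscale face crop."""
    gx = cv2.Sobel(face_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(face_gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    orientation = np.mod(np.arctan2(gy, gx), np.pi)              # unsigned, in [0, pi)
    bins = np.minimum((orientation / np.pi * n_bins).astype(int), n_bins - 1)

    h, w = face_gray.shape
    rows, cols = h // cell_size, w // cell_size
    descriptor = np.zeros((rows, cols, n_bins), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            cell = (slice(r * cell_size, (r + 1) * cell_size),
                    slice(c * cell_size, (c + 1) * cell_size))
            # accumulate gradient magnitude into this cell's orientation bins
            np.add.at(descriptor[r, c], bins[cell].ravel(), magnitude[cell].ravel())
    descriptor /= np.linalg.norm(descriptor) + 1e-6              # global L2 normalization
    return descriptor.ravel()
```

In the framework described above, such a descriptor would be computed after the face has been mapped to the prototypic reference via SIFT-flow registration, so that corresponding cells cover corresponding facial regions across subjects.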
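The second approach tracks fiducial points with a Gabor phase-based disparity estimate: the phase difference of a complex Gabor response between two frames, divided by the filter's spatial frequency, approximates the displacement along the filter's direction. The sketch below is a minimal single-scale, single-orientation version of that principle; the parameter values and helper names are assumptions, and it omits the confidence measure and the iterative conditional procedure that the thesis introduces.

```python
import numpy as np
import cv2

def gabor_phase(patch, wavelength=8.0, theta=0.0, sigma=4.0):
    """Phase of the complex Gabor response at the center of a grayscale patch."""
    ksize = patch.shape[0]
    real = cv2.getGaborKernel((ksize, ksize), sigma, theta, wavelength, 1.0, psi=0.0)
    imag = cv2.getGaborKernel((ksize, ksize), sigma, theta, wavelength, 1.0, psi=np.pi / 2)
    response = np.sum(patch * real) + 1j * np.sum(patch * imag)
    return np.angle(response)

def estimate_displacement(frame_prev, frame_next, point, wavelength=8.0, half=8):
    """1-D displacement of a fiducial point along the filter direction (pixels)."""
    x, y = point                                    # point must lie away from the borders
    prev_patch = frame_prev[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
    next_patch = frame_next[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
    dphi = gabor_phase(next_patch, wavelength) - gabor_phase(prev_patch, wavelength)
    dphi = np.mod(dphi + np.pi, 2 * np.pi) - np.pi  # wrap phase difference to (-pi, pi]
    omega = 2 * np.pi / wavelength                  # spatial frequency of the filter
    return dphi / omega
```

In practice, estimates from several orientations and scales, weighted by a confidence measure, would be combined and iterated to obtain the 2-D displacement of each tracked point.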

Record details

  • Author

    Dahmane, Mohamed

  • Affiliation

    Universite de Montreal (Canada)

  • Degree-granting institution: Universite de Montreal (Canada)
  • Subject: Computer Science
  • Degree: Ph.D.
  • Year: 2011
  • Pages: 237 p.
  • Total pages: 237
  • Format: PDF
  • Language: English (eng)
