International Conference on Automatic Face and Gesture Recognition

Face and Image Representation in Deep CNN Features



Abstract

Face recognition algorithms based on deep convolutional neural networks (DCNNs) have made progress on the task of recognizing faces in unconstrained viewing conditions. These networks operate with compact feature-based face representations learned from a very large number of face images. Although the learned feature sets produced by DCNNs can be highly robust to changes in viewpoint, illumination, and appearance, little is known about the nature of the face code that emerges at the top level of these networks. We analyzed the DCNN features produced by two recent face recognition algorithms. In the first set of experiments, we used the top-level features from the DCNNs as input to linear classifiers aimed at predicting metadata about the images. The results showed that the DCNN features contained surprisingly accurate information about the yaw and pitch of a face, and about whether the input face came from a still image or a video frame. In the second set of experiments, we measured the extent to which individual DCNN features operated in a view-dependent or view-invariant manner for different identities. We found that view-dependent coding was a characteristic of the identities rather than of the DCNN features, with some identities coded consistently in a view-dependent way and others in a view-independent way. In our third analysis, we visualized the DCNN feature space for 24,000+ images of 500 identities. Images in the center of the space were uniformly of low quality (e.g., extreme views, face occlusion, poor contrast, low resolution). Image quality increased monotonically as a function of distance from the origin. This result suggests that image quality information is available in the DCNN features, such that consistently average feature values reflect coding failures that reliably indicate poor or unusable images. Combined, the results offer insight into the coding mechanisms that support robust representation of faces in DCNNs.
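The first set of experiments uses a linear probe: a simple classifier trained on frozen top-level features to see what metadata they encode. The sketch below illustrates the idea with synthetic two-dimensional features standing in for real DCNN outputs; the data distribution, feature dimensionality, and training setup are all hypothetical illustrations, not the paper's actual protocol.

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for top-level DCNN features: two classes
# (e.g., "video frame" = 0 vs "still image" = 1) with shifted means.
def make_features(label, n):
    shift = 1.0 if label == 1 else -1.0
    return [([random.gauss(shift, 0.5), random.gauss(-shift, 0.5)], label)
            for _ in range(n)]

data = make_features(1, 100) + make_features(0, 100)
random.shuffle(data)

# Linear probe: logistic regression trained with plain gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(200):
    for x, y in data:
        z = w[0] * x[0] + w[1] * x[1] + b
        p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
        g = p - y                        # gradient of log-loss w.r.t. z
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

# If the probe separates the classes well, the metadata (here, the
# still-vs-video label) must be linearly decodable from the features.
correct = sum(1 for x, y in data
              if (w[0] * x[0] + w[1] * x[1] + b > 0) == (y == 1))
accuracy = correct / len(data)
print(f"probe accuracy: {accuracy:.2f}")
```

High probe accuracy on held-out features is the evidence pattern the abstract describes: the information (yaw, pitch, media type) is present in the representation even though the network was never trained to report it.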
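The third analysis can be summarized as a norm-based quality proxy: images whose feature vectors sit near the origin of the feature space tend to be unusable. The sketch below is a minimal illustration of that ranking idea; the file names and feature values are invented for the example.

```python
import math

# Hypothetical top-level feature vectors for a few images. Per the
# abstract, low-quality inputs yield near-average (near-origin) features,
# while good inputs produce stronger, more distinctive responses.
features = {
    "sharp_frontal.jpg":   [1.8, -2.1, 0.9, 1.4],
    "moderate_blur.jpg":   [0.7, -0.5, 0.3, 0.6],
    "extreme_profile.jpg": [0.1, 0.05, -0.08, 0.02],
}

def norm(v):
    """Euclidean distance of a feature vector from the origin."""
    return math.sqrt(sum(x * x for x in v))

# Rank images by distance from the origin: the third analysis found that
# image quality increases monotonically with this distance.
ranked = sorted(features, key=lambda name: norm(features[name]))
print("lowest to highest estimated quality:", ranked)
```

In practice such a score could flag "poor or unusable" inputs without a separate quality model, since the signal is already implicit in the face representation.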
