2012 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference

Using the visual Words based on Affine-SIFT descriptors for face recognition



Abstract

Video-based face recognition has drawn a lot of attention in recent years. Meanwhile, the Bag-of-Visual-Words (BoW) representation has recently been applied successfully to image retrieval and object recognition. In this paper, a video-based face recognition approach that uses visual words is proposed. In the classic visual-words approach, Scale-Invariant Feature Transform (SIFT) descriptors of an image are first extracted at interest points detected by the difference of Gaussians (DoG); a visual vocabulary is then generated with k-means clustering, and each descriptor is replaced by the index of its closest visual word. However, SIFT descriptors are less effective on facial images because of pose distortion, facial expressions, and lighting variation. In this paper, we use Affine-SIFT (ASIFT) descriptors as the facial image representation. Experimental results on the UCSD/Honda Video Database and the VidTIMIT Video Database suggest that visual words based on Affine-SIFT descriptors achieve lower error rates in the face recognition task.
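The pipeline described in the abstract (ASIFT descriptors, a k-means visual vocabulary, and word-index histograms) can be illustrated with off-the-shelf tools. The snippet below is a minimal sketch, not the authors' implementation; it assumes OpenCV 4.5 or later, whose cv2.AffineFeature wraps SIFT with affine simulations in the spirit of ASIFT, and scikit-learn for k-means. The vocabulary size k=500 is an illustrative choice, not a value taken from the paper.

```python
# Minimal sketch of a BoW-on-ASIFT pipeline (assumed setup, not the paper's code).
import cv2
import numpy as np
from sklearn.cluster import KMeans

def asift_descriptors(gray_image):
    """Detect keypoints and compute ASIFT-style descriptors for one frame."""
    sift = cv2.SIFT_create()                 # SIFT on DoG interest points
    asift = cv2.AffineFeature.create(sift)   # adds affine (tilt/rotation) simulations
    _, descriptors = asift.detectAndCompute(gray_image, None)
    return descriptors if descriptors is not None else np.empty((0, 128))

def build_vocabulary(training_frames, k=500):
    """Cluster all training descriptors into k visual words (k-means centroids)."""
    all_desc = np.vstack([asift_descriptors(f) for f in training_frames])
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(all_desc)

def bow_histogram(gray_image, vocabulary):
    """Replace each descriptor with the index of its closest visual word,
    then represent the frame as a normalized word-count histogram."""
    desc = asift_descriptors(gray_image)
    k = vocabulary.n_clusters
    if len(desc) == 0:
        return np.zeros(k)
    words = vocabulary.predict(desc)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()
```

Frame-level histograms produced this way can then be compared or classified (e.g., with a nearest-neighbor or SVM classifier) to recognize faces across video frames; the choice of classifier is outside what the abstract specifies.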
