International Joint Conference on Neural Networks

Facial expression recognition using a pairwise feature selection and classification approach



Abstract

This paper proposes a novel approach that combines specialized pairwise classifiers, each trained with a different feature subset, for facial expression classification. The proposed approach first automatically detects and extracts faces from images. Next, each face is split into several regular zones and textural features are extracted from each zone to capture local information. The features extracted from all zones are concatenated to model the whole face. A pairwise approach that considers all pairs of classes, together with a hybrid feature selection strategy, is used both to reduce the dimensionality and to select the features relevant for discriminating between each specific pair of classes. Several pairwise classifiers are then trained on these pairwise feature subsets. Finally, given a new face image, all features are extracted from the face, but only the previously selected subset of features is fed to each pairwise classifier. The outputs of all pairwise classifiers are combined using a majority voting rule to decide on the facial expression. Experiments were carried out on three publicly available datasets (JAFFE, CK and TFEID), and correct classification rates of 99.05%, 98.07% and 99.63% were achieved, respectively. The pairwise approach is therefore effective at discriminating between different facial expressions, and the results achieved by the proposed approach are slightly better than those of several current approaches.
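The sketch below illustrates the general pairwise (one-vs-one) scheme described in the abstract: one classifier per pair of expression classes, each with its own selected feature subset, combined by majority voting. It is a minimal illustration, not the authors' implementation; it assumes pre-extracted per-zone texture features already concatenated into one vector per face, approximates the paper's hybrid feature selection with a simple univariate filter, and uses a linear SVM as a placeholder base classifier.

```python
# Hypothetical sketch of pairwise feature selection + classification with majority voting.
from itertools import combinations
from collections import Counter
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

def train_pairwise(X, y, k=200):
    """Train one classifier per pair of classes, each on its own feature subset.

    X: (n_samples, n_features) concatenated per-zone texture features.
    y: (n_samples,) expression labels.
    """
    models = {}
    for a, b in combinations(np.unique(y), 2):
        mask = np.isin(y, [a, b])
        # Stand-in for the hybrid feature selection: keep the k most discriminative
        # features for this particular pair of classes.
        selector = SelectKBest(f_classif, k=min(k, X.shape[1])).fit(X[mask], y[mask])
        clf = SVC(kernel="linear").fit(selector.transform(X[mask]), y[mask])
        models[(a, b)] = (selector, clf)
    return models

def predict_majority(models, x):
    """Run every pairwise classifier on its selected features and take a majority vote."""
    votes = Counter()
    for (a, b), (selector, clf) in models.items():
        votes[clf.predict(selector.transform(x.reshape(1, -1)))[0]] += 1
    return votes.most_common(1)[0][0]
```

For C expression classes this trains C(C-1)/2 classifiers; at test time all features are extracted once, and each classifier sees only its own pre-selected subset before voting.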
