Journal of Visual Communication & Image Representation

Discriminative two-level feature selection for realistic human action recognition


Abstract

Constructing the bag-of-features model from space-time interest points (STIPs) has been successfully applied to human action recognition. However, how to eliminate the large number of irrelevant STIPs when representing a specific action in realistic scenarios, and how to select discriminative codewords for an effective bag-of-features model, still need further investigation. In this paper, we propose to select more representative codewords based on our pruned-interest-points algorithm, so as to reduce computational cost and improve recognition performance. Taking human perception into account, an attention-based saliency map is employed to choose the salient interest points that fall into salient regions, since visual saliency provides strong evidence for the location of acting subjects. After the salient interest points are identified, each human action is represented with the bag-of-features model. To obtain more discriminative codewords, an unsupervised codeword selection algorithm is applied. Finally, a Support Vector Machine (SVM) is employed to perform human action recognition. Comprehensive experimental results on the widely used and challenging Hollywood-2 Human Action (HOHA-2) and YouTube datasets demonstrate that the proposed method is computationally efficient while achieving improved performance in recognizing realistic human actions.
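The pipeline the abstract describes (prune interest points by saliency, quantize descriptors into a bag-of-features histogram, classify with an SVM) can be sketched as below. This is a minimal illustration with synthetic data, not the authors' implementation: the saliency threshold, codebook size, and the `filter_salient` / `bof_histogram` helpers are assumptions for demonstration, and the paper's attention-based saliency model and unsupervised codeword selection step are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def filter_salient(points, saliency_map, threshold=0.5):
    """Keep only interest points whose (row, col) location falls in a
    salient region of the per-pixel saliency map (hypothetical helper)."""
    rows, cols = points[:, 0], points[:, 1]
    return points[saliency_map[rows, cols] >= threshold]

def bof_histogram(descriptors, codebook):
    """Quantize local descriptors against the codebook and return an
    L1-normalized bag-of-features histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Saliency-based pruning on a toy map: only the point inside the
# salient 3x3 patch survives.
saliency = np.zeros((10, 10))
saliency[2:5, 2:5] = 1.0
pts = np.array([[3, 3], [8, 8]])
kept = filter_salient(pts, saliency)

# Synthetic stand-ins for STIP descriptors of a few videos
# (two classes, separable by construction).
rng = np.random.default_rng(0)
n_videos, pts_per_video, dim, k = 6, 40, 16, 8
videos = [rng.normal(loc=i % 2, size=(pts_per_video, dim)) for i in range(n_videos)]
labels = np.array([i % 2 for i in range(n_videos)])

# Build the visual codebook, represent each video as a histogram,
# and train the SVM classifier.
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.vstack(videos))
X = np.array([bof_histogram(v, codebook) for v in videos])
clf = SVC(kernel="linear").fit(X, labels)
```

In practice the descriptors would come from a STIP detector run on video, the saliency map from an attention model, and the codebook would additionally be pruned by the paper's unsupervised codeword selection before training the SVM.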
