Machine Vision and Applications

Charting-based subspace learning for video-based human action classification

Abstract

We use charting, a non-linear dimensionality reduction algorithm, for articulated human motion classification in multi-view sequences or 3D data. Charting automatically estimates the intrinsic dimensionality of the latent subspace and preserves the local neighbourhood and global structure of high-dimensional data. We classify human action sub-sequences of varying lengths of skeletal poses, adopting a multi-layered subspace classification scheme with layered pruning and search. These sub-sequences can be extracted using either markerless articulated tracking algorithms or markerless motion capture systems. We present a qualitative and quantitative comparison of single-subspace and multiple-subspace classification algorithms. We also identify the minimum sub-sequence length of skeletal poses required for accurate classification, using competing classification systems as the baseline. We test our motion classification framework on the HumanEva, CMU, HDM05 and ACCAD mocap datasets and achieve classification accuracy similar to or better than that of various comparable systems.
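
To make the pipeline sketched in the abstract concrete, the following is a minimal, hypothetical illustration of subspace-based classification of skeletal-pose sub-sequences. The paper's own method uses charting, which also estimates the intrinsic dimensionality automatically; since charting is not available in common libraries, this sketch substitutes scikit-learn's Isomap with a hand-picked subspace dimension. All function names, window lengths and neighbourhood sizes below are illustrative assumptions, not the authors' settings.

```python
# Illustrative stand-in, NOT the paper's implementation: the paper uses charting
# (which estimates the intrinsic dimensionality automatically); here Isomap with a
# fixed dimension is substituted. Data shapes, window sizes and k are assumptions.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier

def windows(poses, length, step):
    """Cut a (T, D) sequence of skeletal poses into flattened sub-sequences."""
    return np.array([poses[t:t + length].ravel()
                     for t in range(0, len(poses) - length + 1, step)])

def train(sequences, labels, length=30, step=5, dim=8, k=5):
    """Embed pose sub-sequences in a low-dimensional subspace, then fit a k-NN classifier."""
    X, y = [], []
    for seq, lab in zip(sequences, labels):
        w = windows(seq, length, step)
        X.append(w)
        y.extend([lab] * len(w))
    X = np.vstack(X)
    embed = Isomap(n_components=dim, n_neighbors=12).fit(X)
    clf = KNeighborsClassifier(n_neighbors=k).fit(embed.transform(X), y)
    return embed, clf

def classify(embed, clf, seq, length=30, step=5):
    """Classify a test sequence by majority vote over its sub-sequence windows."""
    votes = clf.predict(embed.transform(windows(seq, length, step)))
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]
```

A multiple-subspace variant, closer in spirit to the comparison reported in the abstract, would fit one embedding per action class and assign a test window to the class whose subspace represents it best; the single-subspace version above simply classifies embedded windows with k-NN and aggregates per-window votes over the sequence.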
