
Modeling Spatiotemporal Contextual Dynamics with Sparse-Coded Transfer Learning.


Abstract

An important problem in visual understanding is how to recognize and predict human actions or imminent events from video. An ideal intelligent system should be able to detect and track suspicious subjects, predict actions and events, and raise alarms for emergencies before they happen. In this STIR project, we have created a new algorithmic tool set for modeling spatiotemporal contextual dynamics. For low-level and mid-level visual representation, we proposed a class of Schatten-norm-based discriminative metrics, locality-constrained low-rank coding, discriminative analysis by multiple principal angles, and clustering-based fast low-rank approximation for large-scale analysis. We also proposed a decomposed contour prior and a stub-feature-based level set method for shape recognition in images and videos. For high-level understanding and inference, we proposed the ARMA-HMM model for early recognition of human activity and the complex temporal composition model of actionlets for activity prediction. The effectiveness and efficiency of these methods have been extensively evaluated on human action and activity recognition and prediction tasks. The results of this research have been published in 8 peer-reviewed conference papers, one of which received a best paper award, and 1 peer-reviewed journal paper.
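The abstract names the Schatten norm and principal angles without defining them. As an illustration only, the minimal sketch below shows the standard textbook definitions of a Schatten p-norm (the l_p norm of a matrix's singular values) and of the principal angles between two subspaces; it is not a reconstruction of the project's proposed metrics or discriminative analysis, and the function names and NumPy-based setup are assumptions made for this example.

```python
import numpy as np

def schatten_norm(A, p=1):
    """Schatten p-norm of a matrix: the l_p norm of its singular values.
    p=1 gives the nuclear norm (a common convex surrogate for rank),
    p=2 the Frobenius norm, p=inf the spectral norm."""
    s = np.linalg.svd(A, compute_uv=False)
    if np.isinf(p):
        return s.max()
    return (s ** p).sum() ** (1.0 / p)

def principal_angles(X, Y):
    """Principal angles (in radians) between the column spaces of X and Y,
    computed from the singular values of Qx^T Qy, where Qx and Qy are
    orthonormal bases obtained by QR decomposition."""
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 30))
    print("nuclear norm  :", schatten_norm(A, p=1))
    print("Frobenius norm:", schatten_norm(A, p=2))
    X = rng.standard_normal((50, 5))
    Y = rng.standard_normal((50, 5))
    print("principal angles:", principal_angles(X, Y))
```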
