The Visual Computer

DTW-CNN: time series-based human interaction prediction in videos using CNN-extracted features



Abstract

Recently, the prediction of human interactions in videos has become an active subject in computer vision. Its goal is to deduce an interaction while it is still in its early stages. Many approaches have been proposed for interaction prediction, but it remains a challenging problem. In the present paper, the features are optical flow fields extracted from video frames using convolutional neural networks. These features, extracted from successive frames, form a time series, so the problem is modeled as time series prediction. The interaction type is predicted by matching the time series under test against the time series available in the training set. Dynamic time warping finds an optimal match between a pair of time series through a nonlinear mapping between them. Finally, SVM and KNN classifiers with the dynamic time warping distance are used to predict the video label. The results show that the proposed model improves on standard interaction recognition datasets, including TVHI, BIT, and UT-Interaction.
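The matching step described above — computing a DTW distance between a test sequence and each training sequence, then voting among the nearest neighbors — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sequences here are scalar for brevity (the paper's sequences would be per-frame CNN feature vectors, in which case the local cost becomes a vector norm), and the function names and example labels are hypothetical.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    Fills the standard (n+1) x (m+1) cumulative-cost matrix; D[i, j] is the
    minimal cost of aligning a[:i] with b[:j] under the usual three moves
    (match, insertion, deletion)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])  # for vector features: np.linalg.norm(...)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_predict(query, train_series, train_labels, k=1):
    """k-NN classification with DTW as the distance: match the query
    time series against every training series and vote among the k closest."""
    dists = [dtw_distance(query, s) for s in train_series]
    order = np.argsort(dists)[:k]
    votes = [train_labels[i] for i in order]
    return max(set(votes), key=votes.count)
```

For example, a query sequence that is a time-warped copy of a training sequence has DTW distance zero to it and is assigned that sequence's label, which is exactly the property that makes DTW suitable for matching interactions observed at different speeds.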


