IEEE Conference on Computer Vision and Pattern Recognition Workshops

Extraction and Classification of Diving Clips from Continuous Video Footage

Abstract

Due to recent advances in technology, the recording and analysis of video data has become an increasingly common component of athlete training programmes. Today it is incredibly easy and affordable to set up a fixed camera and record athletes in a wide range of sports, such as diving, gymnastics, golf, tennis, etc. However, the manual analysis of the obtained footage is a time-consuming task which involves isolating actions of interest and categorizing them using domain-specific knowledge. In order to automate this kind of task, three challenging sub-problems are often encountered: 1) temporally cropping events/actions of interest from continuous video; 2) tracking the object of interest; and 3) classifying the events/actions of interest. Most previous work has focused on solving just one of the above sub-problems in isolation. In contrast, this paper provides a complete solution to the overall action monitoring task in the context of a challenging real-world exemplar. Specifically, we address the problem of diving classification. This is a challenging problem since the person (diver) of interest typically occupies fewer than 1% of the pixels in each frame. The model is required to learn the temporal boundaries of a dive, even though other divers and bystanders may be in view. Finally, the model must be sensitive to subtle changes in body pose over a large number of frames to determine the classification code. We provide effective solutions to each of the sub-problems which combine to provide a highly functional solution to the task as a whole. The techniques proposed can be easily generalized to video footage recorded from other sports.
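The abstract describes a three-stage pipeline: temporal segmentation of dives from continuous footage, tracking of the diver, and classification of the dive code. The Python sketch below only illustrates how such stages could be composed into one monitoring pipeline; the component names and implementations (temporally_segment, track_diver, classify_dive, DiveClip) are hypothetical placeholders and are not the models proposed in the paper.

```python
# Hypothetical sketch of a three-stage action-monitoring pipeline of the kind
# outlined in the abstract. All component implementations are placeholders.

from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class DiveClip:
    start_frame: int          # temporal boundary: first frame of the dive
    end_frame: int            # temporal boundary: last frame of the dive
    diver_boxes: List[Tuple[int, int, int, int]]  # per-frame (x, y, w, h)
    dive_code: str = ""       # classification code assigned to the dive


def temporally_segment(frames: np.ndarray) -> List[Tuple[int, int]]:
    """Stage 1 (placeholder): find (start, end) frame ranges of candidate dives."""
    # A real system would use a learned temporal detector; here the whole
    # video is returned as a single candidate segment.
    return [(0, len(frames) - 1)]


def track_diver(frames: np.ndarray, start: int, end: int) -> List[Tuple[int, int, int, int]]:
    """Stage 2 (placeholder): track the diver, who may cover <1% of each frame."""
    h, w = frames[0].shape[:2]
    # Dummy tracker: a fixed small box at the centre of every frame.
    box = (w // 2 - 16, h // 2 - 16, 32, 32)
    return [box for _ in range(start, end + 1)]


def classify_dive(frames: np.ndarray, boxes: List[Tuple[int, int, int, int]]) -> str:
    """Stage 3 (placeholder): map a cropped frame sequence to a dive code."""
    return "unknown"


def process_footage(frames: np.ndarray) -> List[DiveClip]:
    """Compose the three stages into one end-to-end monitoring pipeline."""
    clips = []
    for start, end in temporally_segment(frames):
        boxes = track_diver(frames, start, end)
        code = classify_dive(frames[start:end + 1], boxes)
        clips.append(DiveClip(start, end, boxes, code))
    return clips


if __name__ == "__main__":
    # 100 synthetic grayscale frames stand in for continuous video footage.
    video = np.zeros((100, 240, 320), dtype=np.uint8)
    for clip in process_footage(video):
        print(clip.start_frame, clip.end_frame, clip.dive_code)
```

The value of this structure is that each stage can be replaced independently (e.g. swapping in a stronger tracker) without changing the overall monitoring loop, which is what allows the approach to generalize to footage from other sports.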
