2017 International Conference on Security, Pattern Analysis, and Cybernetics

Convolutional neural networks based scale-adaptive kernelized correlation filter for robust visual object tracking

Abstract

Visual object tracking is challenging when the object's appearance undergoes significant changes, such as scale change, background clutter, and occlusion. In this paper, we crop multiscale templates of different sizes around the object and feed them into the network, pretraining it to adapt to size changes of the tracked object. Unlike previous tracking methods based on deep convolutional neural networks (CNNs), we exploit a deep Residual Network (ResNet) to train a multiscale object appearance model offline on ImageNet, and then transfer the features from the pretrained network to the tracking task. Meanwhile, the proposed method combines multilayer convolutional features, making it robust to disturbance, scale change, and occlusion. In addition, we fuse a multiscale search strategy into three kernelized correlation filters, which strengthens the tracker's ability to adapt to scale changes of the object. Unlike previous methods, we directly learn object appearance changes by integrating multiscale templates into the ResNet. We compared our method with other CNN-based and correlation-filter-based tracking methods; the experimental results show that our tracker outperforms existing state-of-the-art trackers on the Object Tracking Benchmark (OTB-2015) and the Visual Object Tracking benchmark (VOT-2015).
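
The abstract describes the pipeline only at a high level: templates are cropped at several scales, convolutional features are correlated with a kernelized filter, and the best-responding scale is kept. Below is a minimal, single-channel NumPy sketch of that kernelized-correlation-filter core with a multiscale search loop. It is an illustration under stated assumptions, not the authors' implementation: the paper fuses multilayer ResNet features across three kernelized filters, whereas here a normalized grayscale patch stands in for the learned features, a single filter is used, and all function names, scale factors, and helpers (preprocess, track_frame, etc.) are hypothetical.

```python
# Minimal KCF-style tracker sketch with multiscale search (illustrative only).
import numpy as np

def preprocess(patch):
    """Normalize a patch and apply a Hann window (standard correlation-filter preprocessing)."""
    patch = (patch - patch.mean()) / (patch.std() + 1e-5)
    win = np.outer(np.hanning(patch.shape[0]), np.hanning(patch.shape[1]))
    return patch * win

def gaussian_label(shape, sigma_factor=0.1):
    """Gaussian regression target with its peak wrapped to (0, 0)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-0.5 * (((ys - h // 2) / (sigma_factor * h)) ** 2
                       + ((xs - w // 2) / (sigma_factor * w)) ** 2))
    return np.roll(g, (-(h // 2), -(w // 2)), axis=(0, 1))

def gaussian_correlation(x, y, sigma=0.5):
    """Gaussian kernel correlation over all cyclic shifts (single channel, Fourier domain)."""
    xy = np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(y))))
    d = np.maximum((np.sum(x ** 2) + np.sum(y ** 2) - 2.0 * xy) / x.size, 0)
    return np.exp(-d / (sigma ** 2))

def train(x, y, lam=1e-4):
    """Kernel ridge regression in the Fourier domain: alpha_hat = y_hat / (k_hat + lambda)."""
    k = gaussian_correlation(x, x)
    return np.fft.fft2(y) / (np.fft.fft2(k) + lam)

def detect(alphaf, x_model, z):
    """Response map of a search patch z against the learned template x_model."""
    k = gaussian_correlation(z, x_model)
    return np.real(np.fft.ifft2(alphaf * np.fft.fft2(k)))

def crop(frame, center, size):
    """Clipped crop of `size` (h, w) around `center` (cy, cx)."""
    (cy, cx), (h, w) = center, size
    ys = np.clip(np.arange(int(cy) - h // 2, int(cy) - h // 2 + h), 0, frame.shape[0] - 1)
    xs = np.clip(np.arange(int(cx) - w // 2, int(cx) - w // 2 + w), 0, frame.shape[1] - 1)
    return frame[np.ix_(ys, xs)]

def resize_nn(img, out_shape):
    """Nearest-neighbour resize, to keep the sketch NumPy-only."""
    rows = np.arange(out_shape[0]) * img.shape[0] // out_shape[0]
    cols = np.arange(out_shape[1]) * img.shape[1] // out_shape[1]
    return img[np.ix_(rows, cols)]

def track_frame(frame, center, tmpl_size, alphaf, x_model, scales=(0.95, 1.0, 1.05)):
    """Evaluate the filter at several scales; return the best (score, scale, shift)."""
    best = (-np.inf, 1.0, (0, 0))
    for s in scales:
        size = (int(tmpl_size[0] * s), int(tmpl_size[1] * s))
        z = preprocess(resize_nn(crop(frame, center, size), tmpl_size))
        resp = detect(alphaf, x_model, z)
        if resp.max() > best[0]:
            dy, dx = np.unravel_index(resp.argmax(), resp.shape)
            best = (resp.max(), s, (dy, dx))
    return best
```

A typical usage of this sketch would be to initialize x_model = preprocess(crop(frame, center, tmpl_size)) and alphaf = train(x_model, gaussian_label(tmpl_size)) on the first frame, then call track_frame on each subsequent frame and update the center and template size from the returned shift and scale, blending the old and new models with a small learning rate. Replacing the grayscale patch with per-layer ResNet feature maps and running one filter per layer would move the sketch closer to the multilayer, three-filter design the abstract describes.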