
Vehicle Detection Based on an Improved Faster R-CNN Method



Abstract

In this paper, we present a novel method for vehicle detection based on the Faster R-CNN framework. We integrate MobileNet into the Faster R-CNN structure. First, MobileNet is used as the base network to generate the feature map. In order to retain more information about vehicle objects, a fusion strategy is applied to multi-layer features to generate a fused feature map. The fused feature map is then shared by the region proposal network (RPN) and Fast R-CNN. In the RPN, we employ a novel dimension clustering method to predict the anchor sizes, instead of choosing the anchor properties manually. Our detection method improves detection accuracy and saves computational resources. The results show that our proposed method achieves 85.21% and 91.16% mean average precision (mAP) on the DIOR and UA-DETRAC datasets, respectively, which are improvements of 1.32% and 1.49% over Faster R-CNN (ResNet152). Also, since fewer operations and parameters are required in the base network, our method has a storage size of 42.52 MB, far less than the 214.89 MB of Faster R-CNN (ResNet50).
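The abstract does not spell out the paper's dimension clustering step, so the sketch below illustrates the standard technique it echoes: k-means over ground-truth box widths and heights with a (1 - IoU) distance, as popularized by YOLOv2. The distance metric, k = 9, and the synthetic boxes are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def iou_wh(box_wh, cluster_wh):
    """IoU between each (w, h) box and each cluster centre, assuming the
    boxes share a common top-left corner (only dimensions matter)."""
    inter = (np.minimum(box_wh[:, None, 0], cluster_wh[None, :, 0]) *
             np.minimum(box_wh[:, None, 1], cluster_wh[None, :, 1]))
    area_box = box_wh[:, 0] * box_wh[:, 1]
    area_cluster = cluster_wh[:, 0] * cluster_wh[:, 1]
    return inter / (area_box[:, None] + area_cluster[None, :] - inter)

def dimension_cluster(box_wh, k=9, iters=100, seed=0):
    """k-means over box dimensions with a (1 - IoU) distance; returns
    k anchor sizes sorted by area."""
    rng = np.random.default_rng(seed)
    clusters = box_wh[rng.choice(len(box_wh), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to the closest cluster, i.e. the one with highest IoU.
        assign = np.argmax(iou_wh(box_wh, clusters), axis=1)
        new_clusters = np.array([
            box_wh[assign == i].mean(axis=0) if np.any(assign == i) else clusters[i]
            for i in range(k)
        ])
        if np.allclose(new_clusters, clusters):
            break
        clusters = new_clusters
    return clusters[np.argsort(clusters[:, 0] * clusters[:, 1])]

if __name__ == "__main__":
    # Synthetic (width, height) pairs standing in for ground-truth vehicle boxes.
    rng = np.random.default_rng(1)
    boxes = rng.uniform(16.0, 256.0, size=(500, 2))
    print(np.round(dimension_cluster(boxes, k=9), 1))
```

Using an IoU-based distance rather than Euclidean distance keeps large boxes from dominating the clusters, which is the usual motivation for deriving anchor sizes from the data instead of hand-picking scales and aspect ratios.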
