Quality Control, Transactions

Mixed Road User Trajectory Extraction From Moving Aerial Videos Based on Convolution Neural Network Detection



Abstract

Vehicle trajectory data under mixed traffic conditions provides critical information for urban traffic flow modeling and analysis. Recently, the application of unmanned aerial vehicles (UAVs) has created the potential to reduce traffic video collection costs and to enhance flexibility in spatial-temporal coverage, supporting trajectory extraction in diverse environments. However, accurate vehicle detection is a challenge due to factors such as small vehicle size and inconspicuous object features in UAV videos. In addition, camera motion in UAV videos complicates the trajectory construction procedure. This research aims at proposing a novel framework for accurate vehicle trajectory construction from UAV videos under mixed traffic conditions. First, a Convolution Neural Network (CNN)-based detection algorithm, You Only Look Once (YOLO) v3, is applied to detect vehicles globally. Then an image registration method based on Shi-Tomasi corner detection is applied for camera motion compensation. Trajectory construction methods based on data correlation and trajectory compensation are proposed to obtain accurate vehicle trajectories. Finally, ensemble empirical mode decomposition (EEMD) is applied to denoise the trajectory data. Our framework is tested on three aerial videos taken by a UAV over urban roads, one of which includes an intersection. The extracted vehicle trajectories are compared with manual counts. The results show that the proposed framework achieves an average Recall of 91.91% for motor vehicles, 81.98% for non-motorized vehicles, and 78.13% for pedestrians across the three videos.
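
The camera motion compensation step can be illustrated with a short sketch. The abstract only states that image registration is based on Shi-Tomasi corner detection; the optical-flow tracking and RANSAC homography below are assumptions about how such registration is typically implemented (here with OpenCV), not the paper's exact method.

```python
import cv2
import numpy as np

def estimate_camera_motion(prev_gray, curr_gray):
    """Estimate a homography that maps the current frame back to the previous one."""
    # Shi-Tomasi corners in the previous (reference) frame.
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=10)
    # Track the corners into the current frame with pyramidal Lucas-Kanade optical flow.
    tracked, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, corners, None)
    good_prev = corners[status.flatten() == 1]
    good_curr = tracked[status.flatten() == 1]
    # RANSAC rejects corner matches that sit on moving road users rather than the background.
    H, _mask = cv2.findHomography(good_curr, good_prev, cv2.RANSAC, 3.0)
    return H

def compensate_point(point_xy, H):
    """Map a detected position in the current frame into the reference frame's coordinates."""
    p = np.array([point_xy[0], point_xy[1], 1.0])
    q = H @ p
    return q[:2] / q[2]
```

Applying `compensate_point` to every detection before data correlation keeps trajectories in a common coordinate frame despite UAV drift.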
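Similarly, the final denoising step can be sketched with the EEMD implementation from the PyEMD package (an assumed choice of library); discarding the first intrinsic mode function is an illustrative rule, as the abstract does not specify which modes are removed.

```python
import numpy as np
from PyEMD import EEMD

def denoise_series(x):
    """Denoise one trajectory coordinate series (e.g. x positions over frames)."""
    eemd = EEMD(trials=100)                       # size of the noise-added ensemble
    imfs = eemd.eemd(np.asarray(x, dtype=float))  # intrinsic mode functions, fastest first
    # Drop the first (highest-frequency) mode, which mostly carries detection jitter,
    # and reconstruct the series from the remaining modes.
    return imfs[1:].sum(axis=0)
```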
