Multi-modal object detection and localization for high integrity driving assistance

Abstract

Much work is currently devoted to increasing the reliability, completeness and precision of the data used by driving assistance systems, particularly in urban environments. Urban environments pose a particular challenge for perception, since they are complex, dynamic and highly variable. This article examines a multi-modal perception approach for enhancing vehicle localization and the tracking of dynamic objects in a world-centric map. 3D ego-localization is achieved by merging stereo vision perception data and proprioceptive information from vehicle sensors. Mobile objects are detected using a multi-layer lidar, which is simultaneously used to identify a zone of interest in order to reduce the complexity of the perception process. Object localization and tracking are then performed in a fixed frame, which simplifies analysis and understanding of the scene. Finally, tracked objects are confirmed by vision, using 3D dense reconstruction in focused regions of interest. Only confirmed objects can generate an alarm or an action on the vehicle. This is crucial for reducing false alarms, which erode the trust that the driver places in the driving assistance system. Synchronization issues between the sensing modalities are solved using predictive filtering. Real experimental results are reported so that the performance of the multi-modal system may be evaluated.
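The abstract states that synchronization issues between the sensing modalities are solved using predictive filtering. The paper's exact filter design is not given here; as a hedged sketch, a generic constant-velocity Kalman filter can illustrate the idea: the state is propagated to each sensor's own timestamp before its measurement is fused, so asynchronous lidar and camera observations of a moving object are aligned in time. All class and method names below (`ConstantVelocityKF`, `predict_to`) and the noise parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class ConstantVelocityKF:
    """1D constant-velocity Kalman filter; state = [position, velocity].
    Illustrative only -- the paper's actual filter is not specified here."""

    def __init__(self, q=0.1, r=0.5):
        self.x = np.zeros(2)             # state estimate [pos, vel]
        self.P = np.eye(2) * 10.0        # state covariance
        self.q = q                       # process-noise intensity
        self.R = np.array([[r]])         # measurement-noise variance
        self.H = np.array([[1.0, 0.0]])  # sensors observe position only
        self.t = 0.0                     # filter timestamp

    def predict_to(self, t):
        """Propagate state to time t (handles irregular sensor rates)."""
        dt = t - self.t
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = self.q * np.array([[dt**3 / 3.0, dt**2 / 2.0],
                               [dt**2 / 2.0, dt]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q
        self.t = t

    def update(self, z, t):
        """Predict to the measurement's own timestamp, then fuse z."""
        self.predict_to(t)
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R      # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

# Two modalities reporting the same object at different instants:
kf = ConstantVelocityKF()
kf.update(np.array([0.0]), t=0.00)   # e.g. a lidar sample
kf.update(np.array([0.5]), t=0.05)   # e.g. a camera sample, 50 ms later
kf.predict_to(0.10)                  # extrapolate to a common time
```

The key point is that `update` first calls `predict_to` with the measurement's timestamp, so each modality is fused against a state extrapolated to its own acquisition time rather than assuming simultaneous sampling.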

Bibliographic information

  • Source
    Machine Vision and Applications | 2014, Issue 3 | pp. 583-598 | 16 pages
  • Author affiliation

    Universite de Technologie de Compiegne (UTC), CNRS Heudiasyc UMR 6599, Compiegne Cedex, France; Centre de Recherches de Royallieu, BP 20529, 60205 Compiegne Cedex, France

  • Indexed in    Science Citation Index (SCI); Engineering Index (EI)
  • Format        PDF
  • Language      English
  • Keywords

    Multi-modal perception; Visual odometry; Object tracking; Dynamic map; Intelligent vehicles;

