International Joint Conference on Neural Networks

Cross-Domain Car Detection Using Unsupervised Image-to-Image Translation: From Day to Night

Abstract

Deep learning techniques have enabled the emergence of state-of-the-art models for object detection tasks. However, these techniques are data-driven, delegating the accuracy to a training dataset that must resemble the images in the target task. Acquiring such a dataset involves annotating images, an arduous and expensive process that generally requires considerable time and manual effort. Thus, a challenging scenario arises when the target domain of application has no annotated dataset available, forcing tasks in such situations to rely on a training dataset from a different domain. Object detection faces this issue in autonomous vehicles, where the large number of driving scenarios yields several application domains, each requiring annotated data for the training process. In this work, a method is presented for training a car detection system with annotated data from a source domain (day images) without requiring image annotations from the target domain (night images). For that, a model based on Generative Adversarial Networks (GANs) is explored to enable the generation of an artificial dataset with its respective annotations. The artificial dataset (fake dataset) is created by translating images from the day-time domain to the night-time domain. The fake dataset, which comprises annotated images of only the target domain (night images), is then used to train the car detector model. Experimental results showed that the proposed method achieved significant and consistent improvements, including an increase of more than 10% in detection performance compared to training with only the available annotated data (i.e., day images).
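
The abstract describes the pipeline at a high level: translate annotated day images into the night domain with a GAN-based image-to-image model, reuse the day annotations for the translated images (the translation preserves image geometry), and train the car detector on the resulting fake night dataset. The sketch below is a rough illustration of the fake-dataset generation step only, not the authors' code; it assumes a pretrained CycleGAN-style day-to-night generator exported as `day2night.pt`, day images under `day/images/`, and YOLO-style normalized bounding-box labels under `day/labels/`. All of these file names and the directory layout are hypothetical.

```python
# Minimal sketch (assumptions noted above): translate each annotated day image
# into a fake night image and reuse its bounding-box labels unchanged.
from pathlib import Path
import shutil

import torch
from PIL import Image
from torchvision import transforms
from torchvision.transforms.functional import to_pil_image

# Hypothetical pretrained day->night translator, scripted with torch.jit.
generator = torch.jit.load("day2night.pt").eval()

to_tensor = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # CycleGAN-style [-1, 1] input
])

src_images = Path("day/images")
src_labels = Path("day/labels")
dst_images = Path("fake_night/images")
dst_labels = Path("fake_night/labels")
dst_images.mkdir(parents=True, exist_ok=True)
dst_labels.mkdir(parents=True, exist_ok=True)

with torch.no_grad():
    for img_path in sorted(src_images.glob("*.jpg")):
        day = to_tensor(Image.open(img_path).convert("RGB")).unsqueeze(0)
        fake_night = generator(day)                    # translate day -> night
        fake_night = (fake_night.squeeze(0) + 1) / 2   # map back to [0, 1] for saving
        to_pil_image(fake_night.clamp(0, 1)).save(dst_images / img_path.name)
        # The translation does not move objects, and YOLO-style labels are
        # normalized, so the day annotations remain valid for the fake night image.
        shutil.copy(src_labels / f"{img_path.stem}.txt", dst_labels / f"{img_path.stem}.txt")
```

The resulting `fake_night/` directory plays the role of the fake dataset from the abstract: annotated images that look like the target (night) domain, ready to be fed to a standard detector training pipeline.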
