Journal: Computers & Security

Secure deep neural networks using adversarial image generation and training with Noise-GAN



Abstract

Recent advances in artificial intelligence have increased the importance of security issues. Nowadays, deep neural networks (DNNs) are used in many critical applications such as pilot drones and self-driving cars, so a DNN malfunction caused by an attack may lead to irreparable damage. An attack may occur either in the training phase (poisoning attacks) or in the testing phase (evasion attacks) by presenting adversarial examples. These samples are maliciously crafted to deceive DNNs. This paper addresses evasion attacks and aims to immunize DNNs through adversarial example generation and training. We propose Noise-GAN, a Generative Adversarial Network (GAN) with a multi-class discriminator that produces a noise which, when added to the original image, yields an adversarial example. In this paper, various types of evasion attacks are considered, and the performance of the proposed method is evaluated on different victim models under various defensive strategies. Experimental results are based on the MNIST and CIFAR-10 datasets; the average success rates for different attacks are reported and compared with state-of-the-art methods. After training the DNNs on adversarial examples generated by Noise-GAN, the non-targeted attack success rate declined from 87.7% to 10.41% on the MNIST dataset and from 91.2% to 57.66% on the CIFAR-10 dataset. (C) 2019 Elsevier Ltd. All rights reserved.
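The core mechanism the abstract describes, a generator that outputs a perturbation which, added to the original image, yields an adversarial example, can be sketched roughly as follows. This is a minimal illustration only: the linear "victim" model and the random stand-in for the generator's output are hypothetical, not the paper's actual Noise-GAN architecture, and the perturbation bound `eps` is an assumed parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_victim(x, w):
    """Toy linear 'victim' classifier: returns class scores for a flat image x."""
    return w @ x

def make_adversarial(x, noise, eps=0.1):
    """Add a bounded noise (as a trained Noise-GAN generator would produce) to x.

    The perturbation is clipped to [-eps, eps] and the result to the valid
    pixel range [0, 1], mirroring the usual adversarial-example constraints.
    """
    delta = np.clip(noise, -eps, eps)
    return np.clip(x + delta, 0.0, 1.0)

# Toy 8-pixel "image" in [0, 1] and a 2-class linear victim model.
x = rng.random(8)
w = rng.standard_normal((2, 8))

# Stand-in for the generator's output: random noise here; in Noise-GAN the
# generator is trained (against a multi-class discriminator) so that this
# noise flips the victim model's prediction.
noise = rng.standard_normal(8)
x_adv = make_adversarial(x, noise)

print(toy_victim(x, w), toy_victim(x_adv, w))
```

Adversarial training, the defense evaluated in the paper, would then retrain the victim model on such `x_adv` samples with their correct labels.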
