
Understanding Adversarial Attack and Defense towards Deep Compressed Neural Networks


Abstract

Modern deep neural networks (DNNs) have demonstrated phenomenal success in many exciting applications such as computer vision, speech recognition, and natural language processing, thanks to recent machine learning model innovations and computing hardware advances. However, recent studies show that state-of-the-art DNNs can be easily fooled by carefully crafted input perturbations that are imperceptible to human eyes, namely "adversarial examples", raising emerging security concerns for DNN-based intelligent systems. Moreover, to ease the intensive computation and memory resource requirements imposed by fast-growing DNN model sizes, aggressively pruning redundant model parameters through various hardware-favorable DNN techniques (e.g., hashing, deep compression, circulant projection) has become a necessity. This procedure further complicates the security issues of DNN systems. In this paper, we first study the vulnerabilities of hardware-oriented deep compressed DNNs under various adversarial attacks. We then survey existing mitigation approaches such as gradient distillation, which was originally tailored to software-based DNN systems. Inspired by gradient distillation and weight reshaping, we further develop a near zero-cost but effective gradient silence (GS) method to protect both software- and hardware-based DNN systems against adversarial attacks. Compared with defensive distillation, our gradient silence method achieves better resilience to adversarial attacks without additional training, while still maintaining very high accuracy across small and large DNN models on image classification benchmarks such as MNIST and CIFAR10.
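The "carefully crafted input perturbations" the abstract describes are typically generated from the model's own loss gradient. As a minimal illustrative sketch (a standard one-step FGSM attack, not the specific attacks studied in this paper), assuming a PyTorch classifier `model` and inputs normalized to [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """One-step Fast Gradient Sign Method: nudge every input pixel
    by +/- epsilon along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # small epsilon keeps the change imperceptible
    return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid pixel range
```

Even epsilon values too small for a human viewer to notice can flip the predicted label of an undefended model, which is the threat the surveyed defenses aim to mitigate.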
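The hardware-favorable compression the abstract refers to removes low-magnitude weights to shrink the model. A rough magnitude-pruning sketch in the spirit of deep compression (the sparsity level and thresholding rule here are illustrative assumptions, not the paper's settings):

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float = 0.9) -> torch.Tensor:
    """Deep-compression-style pruning: zero out the smallest-magnitude
    fraction of weights, keeping only the dominant connections."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight.clone()
    threshold = weight.abs().flatten().kthvalue(k).values  # k-th smallest |w|
    return torch.where(weight.abs() > threshold, weight, torch.zeros_like(weight))
```

As the abstract notes, such pruning changes the loss surface an attacker exploits, which is why the vulnerability of compressed models must be studied separately from their uncompressed counterparts.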


