
A Fast Two-Stage Black-Box Deep Learning Network Attacking Method Based on Cross-Correlation



Abstract

Deep learning networks are widely used in various systems that require classification. However, deep learning networks are vulnerable to adversarial attacks, and the study of adversarial attacks plays an important role in defense. Black-box attacks require less knowledge about target models than white-box attacks do, which makes them easier to launch and more valuable. However, state-of-the-art black-box attacks still suffer from low success rates and large visual distances between the generated adversarial images and the original images. This paper proposes a fast black-box attack based on the cross-correlation (FBACC) method. The attack is carried out in two stages. In the first stage, an adversarial image that will be misclassified as the target label is generated using gradient descent learning. At this point, the image may look quite different from the original one. Then, in the second stage, the visual quality is progressively improved under the condition that the image remains misclassified. By using the cross-correlation method, the error in smooth regions is ignored and the number of iterations is reduced. Compared with previously proposed black-box adversarial attack methods, FBACC achieves a better fooling rate with fewer iterations. When attacking LeNet5 and AlexNet individually, the fooling rates are 100% and 89.56%, respectively; when attacking both at the same time, the fooling rate is 69.78%. The FBACC method also provides a new adversarial attack method for the study of defense against adversarial attacks.
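The abstract does not give the FBACC update rules, query strategy, or the exact form of its cross-correlation criterion. The sketch below only illustrates the two-stage structure described above (stage one pushes the image toward the target label with estimated gradients; stage two restores visual quality while keeping the misclassification). The zeroth-order gradient estimate, the toy linear model standing in for the black-box network, and the use of normalized cross-correlation as a similarity proxy are all illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 28 * 28)) * 0.01   # toy linear "model" weights

def black_box_scores(image):
    """Placeholder for the target model: returns class probabilities.
    In the paper's setting this would be queries to LeNet5 / AlexNet."""
    logits = W @ image.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

def black_box_predict(image):
    return int(np.argmax(black_box_scores(image)))

def estimate_gradient(loss_fn, x, eps=1e-2, n_samples=20):
    """Zeroth-order gradient estimate via random finite differences,
    one common way to run gradient descent with only black-box access."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        grad += (loss_fn(x + eps * u) - loss_fn(x - eps * u)) / (2 * eps) * u
    return grad / n_samples

def cross_correlation(a, b):
    """Normalized cross-correlation, used here as a visual-similarity proxy."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-12
    return float((a * b).sum() / denom)

def two_stage_attack(x_orig, target_label, iters1=100, iters2=100, lr=0.5):
    # Stage 1: drive the image toward the target label, ignoring visual quality.
    x = x_orig.copy()
    loss = lambda z: -np.log(black_box_scores(z)[target_label] + 1e-12)
    for _ in range(iters1):
        if black_box_predict(x) == target_label:
            break
        x = np.clip(x - lr * estimate_gradient(loss, x), 0.0, 1.0)

    # Stage 2: pull the adversarial image back toward the original to raise
    # visual quality, accepting a step only if the target label is preserved.
    for _ in range(iters2):
        candidate = x + 0.1 * (x_orig - x)
        if black_box_predict(candidate) == target_label:
            x = candidate
    return x

if __name__ == "__main__":
    original = rng.random((28, 28))
    adv = two_stage_attack(original, target_label=3)
    print("predicted label:", black_box_predict(adv),
          "cross-correlation with original:", round(cross_correlation(adv, original), 3))
```

In this sketch the second stage accepts a step toward the original image only when the target label survives, which mirrors the abstract's idea of improving visual quality (here measured by cross-correlation) under the constraint that the image stays misclassified.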
