
Dropping Pixels for Adversarial Robustness

Abstract

Deep neural networks are vulnerable to adversarial examples. In this paper, we propose to train and test networks on randomly subsampled images with high drop rates. We show that this approach significantly improves robustness against adversarial examples under bounded L0, L2, and L∞ perturbations, while reducing standard accuracy only slightly. We argue that subsampling pixels can be viewed as providing a set of robust features for the input image and thus improves robustness without requiring adversarial training.
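The abstract describes applying the same random pixel subsampling, with a high drop rate, at both training and test time. Below is a minimal sketch of such a transform, assuming a PyTorch pipeline; the function name drop_pixels and the 0.9 drop rate are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of random pixel subsampling (not the authors' exact code).
    import torch

    def drop_pixels(images: torch.Tensor, drop_rate: float = 0.9) -> torch.Tensor:
        """Zero out a random subset of pixel locations, shared across channels.

        images: batch of shape (N, C, H, W); drop_rate: fraction of pixels dropped.
        """
        n, _, h, w = images.shape
        # One Bernoulli keep-mask per image, broadcast over the channel dimension.
        keep = (torch.rand(n, 1, h, w, device=images.device) >= drop_rate).to(images.dtype)
        return images * keep

    # The same transform would be applied in both the training loop and at
    # inference, e.g. logits = model(drop_pixels(batch)).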
