Conference on Medical Imaging: Image Processing

Adversarial Robustness Study of Convolutional Neural Network for Lumbar Disk Shape Reconstruction from MR images

Abstract

Machine learning technologies using deep neural networks (DNNs), especially convolutional neural networks (CNNs), have made automated, accurate, and fast medical image analysis a reality for many applications, and some DNN-based medical image analysis systems have even been FDA-cleared. Despite this progress, challenges remain in building DNNs that are as reliable as human expert doctors. It is known that DNN classifiers may not be robust to noise: by adding a small amount of noise to an input image, an attacker can cause a DNN classifier to misclassify the noisy image (i.e., an in-distribution adversarial sample) even though it classifies the clean image correctly. Another issue is caused by out-of-distribution samples, which are not similar to any sample in the training set; given such a sample as input, the output of a DNN becomes meaningless. In this study, we investigated the in-distribution (IND) and out-of-distribution (OOD) adversarial robustness of a representative CNN for lumbar disk shape reconstruction from spine MR images. To study the relationship between training-set size and robustness to IND adversarial attacks, we used a data augmentation method to create training sets with different levels of shape variation. We used the PGD-based algorithm for IND adversarial attacks and extended it to generate OOD adversarial samples for model testing. The results show that IND adversarial training can improve the CNN's robustness to IND adversarial attacks, and that larger training datasets may lead to higher IND robustness. However, defending against OOD adversarial attacks remains a challenge.
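
The PGD attack referenced in the abstract iteratively perturbs an input along the sign of the loss gradient and projects the result back into an L-infinity ball around the original image. Below is a minimal sketch in PyTorch, assuming a `model` that maps an MR image to disk-shape outputs and a `loss_fn` measuring reconstruction error; all names and hyperparameter values here are illustrative assumptions, not taken from the paper.

```python
import torch

def pgd_attack(model, x, y, loss_fn, eps=0.03, alpha=0.007, steps=10):
    """Return an adversarial version of x within an L-infinity eps-ball (a sketch)."""
    x_orig = x.detach()
    # Random start inside the eps-ball (standard PGD initialization).
    x_adv = x_orig + torch.empty_like(x_orig).uniform_(-eps, eps)
    x_adv = x_adv.clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                 # gradient ascent on the loss
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                       # keep pixels in a valid range
    return x_adv.detach()
```

In this reading, IND adversarial training amounts to replacing (or augmenting) each clean training image with its `pgd_attack` output at every training iteration, so the network learns from worst-case perturbed inputs.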