Abstract: Deep learning is one of the most popular computer science techniques, with applications in natural language processing, image processing, pattern identification, and various other fields. Despite the success of deep learning algorithms in scenarios such as spam detection, malware detection, object detection and tracking, face recognition, and automatic driving, these algorithms and their associated training data remain vulnerable to numerous security threats, which ultimately cause significant performance degradation. Moreover, supervised learning models are affected by manipulated inputs known as adversarial examples: images perturbed with a level of noise that is imperceptible to humans. Adversarial inputs are crafted to deliberately confuse a neural network, restricting its use in sensitive application areas such as biometrics. In this paper, an optimized defense approach is proposed to recognize adversarial iris examples efficiently. The defense strategy applies the Curvelet Transform Denoising method, which examines every sub-band of the adversarial image and reconstructs the image that was altered by the attacker. Salient iris features are then extracted from the reconstructed iris image using a pre-trained Convolutional Neural Network model (VGG16), followed by multiclass classification. The classification is performed by a Support Vector Machine tuned with Particle Swarm Optimization (PSO-SVM). The proposed system is evaluated on adversarial iris images generated by attacks such as the FGSM, iGSM, and DeepFool methods. Experimental results on the benchmark IITD iris dataset show excellent outcomes, with the highest average accuracy of 95.8%.
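To make the FGSM attack mentioned above concrete, the following is a minimal NumPy sketch on a toy logistic model, not the paper's actual experimental setup; the weights, input, and epsilon are hypothetical. FGSM perturbs each input feature by a fixed step in the direction of the sign of the loss gradient with respect to the input.

```python
import numpy as np

# Hypothetical weights of a toy logistic model: p = sigmoid(w . x).
w = np.array([0.5, -1.2, 0.8])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, eps):
    """Fast Gradient Sign Method on a logistic model.

    For cross-entropy loss, the gradient of the loss with respect to
    the input x is (sigmoid(w . x) - y) * w; FGSM moves x by
    eps * sign(gradient), so every feature shifts by exactly eps.
    """
    grad = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad)

# Hypothetical clean input with true label y = 1.
x = np.array([1.0, 2.0, -1.0])
x_adv = fgsm(x, y=1.0, eps=0.1)
print(np.abs(x_adv - x))  # each component perturbed by eps = 0.1
```

Because the perturbation magnitude per feature is bounded by eps, a small eps keeps the adversarial image visually indistinguishable from the clean one while still shifting the model's decision, which is the threat the denoising defense in this paper aims to undo.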