Automation, Control and Intelligent Systems

Research on Face Recognition Algorithm Based on Improved Residual Neural Network


Abstract

The residual neural network is prone to two problems when used for face recognition: overfitting, and slow or non-convergent behaviour of the loss function in the later stages of training. To address overfitting, this paper enlarges the training set by adding Gaussian noise and salt-and-pepper noise to the original images as a form of data augmentation, and adds dropout to the network to improve its generalization ability. The loss function and the optimization algorithm of the network are also improved. After analyzing the advantages and disadvantages of the Softmax, center, and triplet losses, a joint loss function is proposed. As for the Adam algorithm, which is currently the most widely used optimizer, its convergence is relatively fast but the converged result is not always satisfactory; based on how samples are iterated during the training of a convolutional neural network, a memory factor and the idea of momentum are introduced into the Adam optimization algorithm, which speeds up convergence and improves its quality. Simulation experiments on the data-augmented ORL and Yale face databases demonstrate the feasibility of the proposed method. Finally, the training time and power consumption of the network before and after the improvements are compared on the CMU_PIE database, and their performance is analyzed comprehensively.
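The noise-based augmentation mentioned above can be illustrated with a minimal NumPy sketch that triples the training set with Gaussian-noised and salt-and-pepper-noised copies of each image. The noise parameters `sigma` and `amount` below are illustrative assumptions, not values reported by the paper.

```python
import numpy as np

def add_gaussian_noise(image, sigma=10.0):
    """Return a copy of `image` (uint8) with additive zero-mean Gaussian noise."""
    noisy = image.astype(np.float32) + np.random.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_pepper_noise(image, amount=0.02):
    """Set a random fraction `amount` of pixels to 0 (pepper) or 255 (salt)."""
    noisy = image.copy()
    mask = np.random.rand(*image.shape[:2])
    noisy[mask < amount / 2] = 0        # pepper
    noisy[mask > 1 - amount / 2] = 255  # salt
    return noisy

def augment(images):
    """Triple the training set: original, Gaussian-noised and salt-and-pepper-noised copies."""
    out = []
    for img in images:
        out.extend([img, add_gaussian_noise(img), add_salt_pepper_noise(img)])
    return out
```

The ORL and Yale images are grayscale, so the same functions apply unchanged to plain HxW arrays; dropout is then added to the network itself (e.g. a dropout layer before the fully connected classifier) rather than to the data.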
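For the joint loss, the abstract names the Softmax (cross-entropy), center, and triplet losses but does not give the combination used. A hedged PyTorch sketch of one such combination follows; the weights `lambda_center` and `lambda_triplet` and the simplified mean-squared form of the center term are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointLoss(nn.Module):
    """Weighted sum of softmax cross-entropy, center loss and triplet loss."""

    def __init__(self, num_classes, feat_dim,
                 lambda_center=0.01, lambda_triplet=0.1, margin=0.2):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.triplet = nn.TripletMarginLoss(margin=margin)
        self.lambda_center = lambda_center
        self.lambda_triplet = lambda_triplet
        # One learnable center per identity, as in the original center-loss formulation.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, logits, features, labels, anchor, positive, negative):
        loss_softmax = self.ce(logits, labels)
        # Pull each embedding toward the learnable center of its class.
        loss_center = F.mse_loss(features, self.centers[labels])
        # Anchor/positive/negative embeddings are assumed to come from a triplet sampler.
        loss_triplet = self.triplet(anchor, positive, negative)
        return (loss_softmax
                + self.lambda_center * loss_center
                + self.lambda_triplet * loss_triplet)
```

In training, the total loss is backpropagated through both the network parameters and the class centers, and the triplet term requires a sampler that builds (anchor, positive, negative) identity triplets from each mini-batch.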

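The abstract also states that a memory factor and momentum are introduced into Adam, without giving the update rule. The sketch below shows one possible reading: a standard Adam step to which a fraction (`memory`) of the previous update is added, heavy-ball style. It is a plausible variant for comparison, not the paper's algorithm.

```python
import numpy as np

def adam_with_memory(grad_fn, w, steps=1000, lr=1e-3,
                     beta1=0.9, beta2=0.999, eps=1e-8, memory=0.5):
    """Adam plus a remembered fraction of the previous update (hypothetical variant)."""
    m = np.zeros_like(w)             # first-moment estimate (Adam)
    v = np.zeros_like(w)             # second-moment estimate (Adam)
    prev_update = np.zeros_like(w)   # "memory" of the previous parameter step
    for t in range(1, steps + 1):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        adam_step = lr * m_hat / (np.sqrt(v_hat) + eps)
        update = adam_step + memory * prev_update  # carry part of the last step forward
        w = w - update
        prev_update = update
    return w
```

With `memory=0` this reduces to plain Adam, which makes it straightforward to compare convergence speed and the final loss before and after the modification.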
