In this paper, we present a neural network classifier training method based on dynamic data reduction (DDR) to address the long training times and poor generalization ability of neural network classifiers. In our approach, each training sample is assigned a weight that serves as a measure of its importance; this weight is dynamically updated according to the sample's classification error rate at each training iteration. The training set is then reduced based on these weights, which increases the proportion of error-prone boundary samples and diminishes the role of redundant kernel samples. Our numerical experiments show that this training method not only substantially shortens the training time of the given networks, but also significantly enhances their classification and generalization abilities.
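The weight-update-and-reduce cycle described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function names (`ddr_update_weights`, `ddr_reduce`) and the specific boost/decay factors and keep fraction are assumptions chosen for illustration.

```python
import numpy as np

def ddr_update_weights(weights, misclassified, boost=1.5, decay=0.9):
    """Hypothetical DDR-style update: increase the weights of
    misclassified (boundary) samples, decay the weights of correctly
    classified (redundant) ones, then renormalise so the weights
    remain a distribution."""
    new = np.where(misclassified, weights * boost, weights * decay)
    return new / new.sum()

def ddr_reduce(X, y, weights, keep_fraction=0.7):
    """Drop the lowest-weight samples, keeping a fraction of the set."""
    k = max(1, int(round(len(X) * keep_fraction)))
    idx = np.argsort(weights)[-k:]  # indices of the k largest weights
    return X[idx], y[idx], weights[idx]

# Toy demonstration: 6 samples, two of which were "misclassified"
X = np.arange(12, dtype=float).reshape(6, 2)
y = np.array([0, 0, 0, 1, 1, 1])
w = np.full(6, 1.0 / 6)
miss = np.array([False, False, True, True, False, False])

w = ddr_update_weights(w, miss)          # boundary samples gain weight
X_r, y_r, w_r = ddr_reduce(X, y, w, keep_fraction=0.5)
```

After the update, the two misclassified samples carry more weight than the others, so the reduction step preferentially retains them for the next training iteration.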