Research journal of applied science, engineering and technology

CAST: A Constant Adaptive Skipping Training Algorithm for Improving the Learning Rate of Multilayer Feedforward Neural Networks



Abstract

The Multilayer Feedforward Neural Network (MFNN) has been widely applied to a broad range of supervised pattern-recognition tasks. The major drawback of the MFNN training phase is its long training time, especially when it is trained on very large datasets. Accordingly, this paper proposes an enhanced training algorithm, the Constant Adaptive Skipping Training (CAST) algorithm, which focuses on reducing the training time of the MFNN through stochastic presentation of the training data. The stochastic presentation is accomplished by partitioning the training dataset into two disjoint classes, classified and misclassified, based on comparing each sample's calculated error measure against a threshold value. Only the input samples in the misclassified class are presented to the MFNN for training in the next epoch, while the correctly classified samples are constantly skipped, dynamically reducing the number of training samples presented in each epoch. Steadily shrinking the training set in this way lowers the total training time, thereby speeding up the training process. The CAST algorithm can be combined with any supervised training algorithm, can train datasets with any number of patterns, and is very simple to implement. The proposed algorithm is evaluated on the benchmark datasets Iris, Waveform, Heart Disease, and Breast Cancer for different learning rates. Simulation studies show that CAST trains faster than the LAST and standard BPN algorithms.
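The skipping loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and variable names (`cast_train`, `train_step`, `error_fn`) and the one-weight toy learner are assumptions for demonstration; a real run would substitute a full MFNN backpropagation update.

```python
def cast_train(samples, train_step, error_fn, threshold, epochs):
    """Sketch of the CAST loop: each epoch, train only on the pool of
    still-misclassified samples; any sample whose error falls to the
    threshold or below is skipped in all later epochs, so the active
    training set shrinks monotonically."""
    active = list(samples)   # start with the full training set
    sizes = []               # active-set size at the start of each epoch
    for _ in range(epochs):
        sizes.append(len(active))
        still_misclassified = []
        for x, y in active:
            train_step(x, y)                    # one weight update
            if error_fn(x, y) > threshold:      # still above threshold?
                still_misclassified.append((x, y))
        active = still_misclassified            # skip the rest from now on
        if not active:                          # everything classified
            break
    return sizes

# Toy usage (illustrative only): a one-weight linear model fitting y = x.
w = [0.0]
lr = 0.2
data = [(1.0, 1.0), (2.0, 2.0), (0.5, 0.5)]

def step(x, y):
    w[0] += lr * (y - w[0] * x) * x   # gradient step on squared error

def err(x, y):
    return abs(y - w[0] * x)          # absolute-error measure

sizes = cast_train(data, step, err, threshold=0.05, epochs=20)
```

Here `sizes` records how the active training set shrinks epoch by epoch, which is the mechanism the paper credits for the reduced total training time.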
