14th International Conference on Computer and Information Technology

Training neural network with damped oscillation and maximized gradient function



Abstract

A constant learning rate (LR) is the most widely used setting for training neural networks (NNs) with back propagation (BP), but it is usually not preferable: a small learning rate leads to a slow convergence rate, while a large learning rate reduces accuracy. In this paper, we propose a faster supervised algorithm that achieves higher accuracy within a few iterations when training NNs. Training of NNs with damped oscillation and maximized gradient function (DOMG) implements a damped oscillation in the learning rate, called the damped learning rate (DLR), which yields higher accuracy within a few iterations, and uses a maximized gradient function for fast weight updating. DOMG is extensively tested on eight real-world benchmark classification problems: heart disease, ionosphere, Australian credit card, time series, wine, horse, glass, and soybean identification. The proposed DOMG outperforms the existing BP in terms of convergence rate and generalization ability.
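The abstract does not give the exact form of the damped learning rate, but the idea of an oscillating step size with an exponentially decaying envelope can be sketched as follows. The schedule formula, its constants, and the toy quadratic objective are all illustrative assumptions, not the authors' method, and the maximized gradient function of DOMG is not modeled here.

```python
import math

def damped_lr(t, lr0=0.2, decay=0.05, omega=0.5):
    """Hypothetical damped-oscillation learning rate (DLR).

    An exponentially decaying envelope is modulated by a cosine term,
    so early iterations take large, oscillating steps and later
    iterations settle to small ones. This exact form is an assumption;
    the paper's schedule may differ.
    """
    return lr0 * math.exp(-decay * t) * (1.0 + math.cos(omega * t)) / 2.0

def train(w0=5.0, steps=200):
    """Gradient descent on the toy objective f(w) = w^2 using the DLR.

    Because every step multiplies w by (1 - 2 * lr) with lr <= 0.2,
    |w| shrinks monotonically toward the minimum at w = 0.
    """
    w = w0
    for t in range(steps):
        grad = 2.0 * w          # gradient of f(w) = w^2
        w -= damped_lr(t) * grad
    return w
```

On this toy problem the schedule drives the weight close to the optimum while the step size decays, which mirrors the abstract's claim of fast early progress followed by stable convergence.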

