International Conference on Soft Computing and Pattern Recognition

Strategies for Determining Effective Step Size of the Backpropagation Algorithm for On-Line Learning


Abstract

In this paper, we investigate strategies for determining the step size of the backpropagation (BP) algorithm for on-line learning. It is known that for off-line learning the step size can be determined adaptively during learning. For on-line learning, since the same data may never appear again, the strategies proposed for off-line learning cannot be applied. If the neural network is not updated with a proper step size, its performance may not improve steadily. Here, we investigate four strategies for setting the step size: 1) constant, 2) random, 3) linearly decreasing, and 4) inversely proportional to time. The first strategy uses a constant step size throughout learning, the second uses a random step size, the third decreases the step size linearly, and the fourth sets the step size inversely proportional to the learning time. Experimental results show that the third and fourth strategies are more effective. Moreover, compared with the third strategy, the fourth is more stable and usually improves performance steadily.
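The four step-size schedules described in the abstract can be sketched as follows. This is a minimal illustration, not code from the paper; the initial rate `eta0` and the decay horizon `T` are assumed illustrative values, and the exact decay constants used by the authors are not given here.

```python
import random


def step_size(strategy, t, eta0=0.1, T=1000):
    """Return the step size at iteration t (0-indexed).

    eta0 (initial rate) and T (horizon for the linear schedule) are
    illustrative defaults, not values reported in the paper.
    """
    if strategy == "constant":
        # 1) constant: the same step size throughout learning
        return eta0
    if strategy == "random":
        # 2) random: a step size drawn uniformly from (0, eta0]
        return random.uniform(0.0, eta0)
    if strategy == "linear":
        # 3) linearly decreasing: from eta0 down to 0 at t = T
        return eta0 * max(0.0, 1.0 - t / T)
    if strategy == "inverse":
        # 4) inversely proportional to time: eta0 / (1 + t)
        return eta0 / (1.0 + t)
    raise ValueError(f"unknown strategy: {strategy}")
```

For example, the inverse schedule shrinks quickly at first and then flattens out, which is consistent with the abstract's observation that it behaves more stably than the linear schedule late in learning.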
