Neural Information Processing, Part 1; Lecture Notes in Computer Science, Vol. 4232

Convergence of Batch BP Algorithm with Penalty for FNN Training

Abstract

Penalty methods are commonly used to improve the generalization performance of feedforward neural networks and to control the magnitude of the network weights. Weight boundedness and convergence results are presented for the batch BP algorithm with a penalty term, applied to the training of feedforward neural networks with one hidden layer. A key point of the proofs is the monotonicity of the penalized error function during the training iterations. A relationship between the learning rate and the penalty parameter is proposed to guarantee convergence. The algorithm is applied to two classification problems to support our theoretical findings.
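To make the setting concrete, the sketch below runs batch BP with an L2 weight penalty on a one-hidden-layer network for a toy two-class problem. The penalized error E = (1/2) * sum_j (y_j - o_j)^2 + lam * ||weights||^2, the tanh and sigmoid activations, and the values of the learning rate eta and penalty coefficient lam are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (assumptions, not the paper's exact formulation):
# batch BP with an L2 weight penalty for a one-hidden-layer network.
# Penalized error: E = 0.5 * sum_j (y_j - o_j)^2 + lam * (||V||^2 + ||w||^2).
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class problem: 2-D inputs, labels in {0, 1}.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

n_hidden = 8
V = rng.normal(scale=0.1, size=(2, n_hidden))   # input-to-hidden weights
w = rng.normal(scale=0.1, size=n_hidden)        # hidden-to-output weights
eta, lam = 1e-3, 1e-3                           # learning rate, penalty coefficient

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ V)       # hidden-layer outputs
    o = sigmoid(h @ w)       # network output
    return h, o

def penalized_error(o):
    return 0.5 * np.sum((y - o) ** 2) + lam * (np.sum(V ** 2) + np.sum(w ** 2))

for epoch in range(2001):
    h, o = forward(X)

    # Gradients of the penalized error, accumulated over the whole batch.
    delta_o = (o - y) * o * (1.0 - o)                # output-layer error signal
    grad_w = h.T @ delta_o + 2.0 * lam * w
    delta_h = np.outer(delta_o, w) * (1.0 - h ** 2)  # back-propagated to hidden layer
    grad_V = X.T @ delta_h + 2.0 * lam * V

    # One weight update per pass over the full training set (batch mode).
    w -= eta * grad_w
    V -= eta * grad_V

    if epoch % 500 == 0:
        print(f"epoch {epoch:4d}  penalized error = {penalized_error(forward(X)[1]):.4f}")
```

The key knobs are eta and lam: the paper's analysis concerns how they must be related so that the summed-gradient update above keeps the penalized error monotonically decreasing and the weights bounded throughout training.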
