IEEE International Joint Conference on Neural Networks

A Constrained-Optimization Approach to Training Neural Networks for Smooth Function Approximation and System Identification

Abstract

A constrained-backpropagation training technique is presented to suppress interference and preserve prior knowledge in sigmoidal neural networks while new information is learned incrementally. The technique is based on constrained optimization: it minimizes an error function subject to a set of equality constraints derived via an algebraic training approach. As a result, sigmoidal neural networks with long-term procedural memory (also known as implicit knowledge) can be obtained and trained repeatedly online without experiencing interference. The generality and effectiveness of the approach are demonstrated through three applications: function approximation, solution of differential equations, and system identification. The results show that the long-term memory is maintained virtually intact, and that the approach may lead to computational savings because the implicit knowledge provides a lasting performance baseline for the neural network.
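
The abstract gives no implementation details, but the core idea lends itself to a short sketch. The NumPy example below, a minimal illustration rather than the authors' method, trains a one-hidden-layer sigmoidal network on new data while preserving prior knowledge stored as input-output "memory" pairs. Where the paper enforces the memories as hard equality constraints obtained by algebraic training, this sketch relaxes them to a large quadratic penalty (a standard constrained-optimization relaxation); the target function, network size, and hyperparameters are all assumptions made for the example.

```python
# Minimal sketch (not the paper's implementation): incremental learning with
# a quadratic-penalty relaxation of the equality constraints that the paper
# derives via algebraic training. All task details here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer sigmoidal network: y = w2 @ sigmoid(W1 @ x + b1) + b2
n_hidden = 20
W1 = rng.normal(size=(n_hidden, 1))
b1 = rng.normal(size=(n_hidden, 1))
w2 = 0.1 * rng.normal(size=(1, n_hidden))
b2 = np.zeros((1, 1))

def forward(x):                       # x has shape (1, N)
    h = sigmoid(W1 @ x + b1)          # hidden activations, (n_hidden, N)
    return w2 @ h + b2, h

def grads(e, h, x):
    """Gradients of 0.5 * mean(e**2) w.r.t. (w2, b2, W1, b1)."""
    n = x.shape[1]
    back = (w2.T @ e) * h * (1.0 - h)      # backpropagated error
    return (e @ h.T / n, e.mean(axis=1, keepdims=True),
            back @ x.T / n, back.mean(axis=1, keepdims=True))

# "Long-term memory": points the network must keep reproducing (the paper's
# equality constraints). The sine target is invented for this illustration.
x_mem = np.linspace(-2.0, 0.0, 5).reshape(1, -1)
y_mem = np.sin(x_mem)

# New data, learned incrementally, from a different region of the input space.
x_new = np.linspace(0.0, 2.0, 40).reshape(1, -1)
y_new = np.sin(x_new)

mu, lr = 10.0, 0.02                   # penalty weight and learning rate
for _ in range(30000):
    # Minimize  E_new + mu * E_mem ; a large mu approximates the constraint
    # "network output equals y_mem at x_mem", enforced exactly in the paper.
    y_hat_new, h_new = forward(x_new)
    y_hat_mem, h_mem = forward(x_mem)
    gn = grads(y_hat_new - y_new, h_new, x_new)
    gm = grads(y_hat_mem - y_mem, h_mem, x_mem)
    for p, g_n, g_m in zip((w2, b2, W1, b1), gn, gm):
        p -= lr * (g_n + mu * g_m)    # in-place update of each parameter

print("memory RMSE:  ", float(np.sqrt(np.mean((forward(x_mem)[0] - y_mem) ** 2))))
print("new-data RMSE:", float(np.sqrt(np.mean((forward(x_new)[0] - y_new) ** 2))))
```

Raising mu tightens the approximate constraint at the cost of a harder optimization; the equality-constraint formulation described in the abstract avoids this trade-off by satisfying the memory equations directly.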