Source: IEEE Transactions on Neural Networks and Learning Systems

On-Line Node Fault Injection Training Algorithm for MLP Networks: Objective Function and Convergence Analysis



Abstract

Improving the fault tolerance of a neural network has been studied for more than two decades, and various training algorithms have since been proposed. The on-line node fault injection-based algorithm is one such algorithm, in which hidden nodes randomly output zeros during training. While the idea is simple, theoretical analyses of this algorithm are far from complete. This paper presents its objective function and the convergence proof. We consider three cases for multilayer perceptrons (MLPs): 1) MLPs with a single linear output node; 2) MLPs with multiple linear output nodes; and 3) MLPs with a single sigmoid output node. For the convergence proof, we show that the algorithm converges with probability one. For the objective function, we show that the corresponding objective functions of cases 1) and 2) are of the same form: both consist of a mean square error term, a regularizer term, and a weight decay term. For case 3), the objective function is slightly different from that of cases 1) and 2). With the objective functions derived, we can compare the similarities and differences among various algorithms and various cases.
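To make the fault injection idea concrete, the sketch below shows an on-line (per-sample) update for case 1), an MLP with sigmoid hidden units and a single linear output node, where each hidden node is independently forced to output zero during training. This is only an illustrative reconstruction of the general idea described in the abstract, not the paper's exact algorithm; the layer sizes, fault probability `p_fault`, and learning rate are hypothetical choices.

```python
# Minimal sketch of on-line node fault injection training for a one-hidden-layer
# MLP with a single linear output node. Illustrative only; `p_fault`, `lr`, and
# the layer sizes are assumed values, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 4, 8          # hypothetical layer sizes
p_fault = 0.1                  # probability that a hidden node outputs zero
lr = 0.01                      # learning rate

W = rng.normal(scale=0.5, size=(n_hidden, n_in))   # input-to-hidden weights
b = np.zeros(n_hidden)                             # hidden biases
v = rng.normal(scale=0.5, size=n_hidden)           # hidden-to-output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y):
    """One on-line update with random node faults injected at the hidden layer."""
    global W, b, v
    h = sigmoid(W @ x + b)                          # fault-free hidden activations
    mask = (rng.random(n_hidden) >= p_fault).astype(float)
    h_f = mask * h                                  # faulty nodes output zero
    y_hat = v @ h_f                                 # single linear output node
    err = y_hat - y
    # Gradient of the squared error, computed through the faulty forward pass
    grad_v = err * h_f
    grad_h = err * v * mask * h * (1.0 - h)
    v -= lr * grad_v
    W -= lr * np.outer(grad_h, x)
    b -= lr * grad_h
    return 0.5 * err ** 2

# Toy usage: fit a random linear target from a stream of synthetic samples
true_w = rng.normal(size=n_in)
for t in range(2000):
    x = rng.normal(size=n_in)
    train_step(x, true_w @ x)
```

Averaged over the random fault masks, this kind of update behaves like minimizing a mean square error plus regularization and weight decay terms, which is the form of objective function the paper derives for cases 1) and 2).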
