IEEE Transactions on Neural Networks and Learning Systems

Non-Divergence of Stochastic Discrete Time Algorithms for PCA Neural Networks


Abstract

Learning algorithms play an important role in the practical application of neural networks based on principal component analysis, often determining the success, or otherwise, of these applications. These algorithms must not diverge, but it is very difficult to study their convergence properties directly, because they are described by stochastic discrete time (SDT) algorithms. This brief analyzes the original SDT algorithms directly and derives invariant sets that guarantee the non-divergence of these algorithms in a stochastic environment, provided the learning parameters are selected appropriately. The theoretical results are verified by a series of simulation examples.
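The paper's exact algorithms and invariant sets are not reproduced on this page. As a rough, assumed illustration of the setting, the sketch below runs Oja's rule, a classic SDT learning algorithm for extracting the first principal component, with a small constant learning rate (an illustrative choice standing in for the paper's learning-parameter conditions) and checks that the weight vector stays bounded while aligning with the top eigenvector of the data covariance.

```python
import numpy as np

# Illustrative sketch, NOT the paper's exact algorithm: Oja's rule, a
# classic stochastic discrete-time (SDT) PCA learning rule,
#   w(k+1) = w(k) + eta * y * (x - y * w(k)),   y = w(k)^T x.
# Non-divergence analyses of this kind ask for conditions (e.g., on the
# learning rate eta) keeping the iterates w(k) inside a bounded invariant set.

rng = np.random.default_rng(0)

# Synthetic zero-mean data with an anisotropic covariance, so the true
# principal eigenvector is known for comparison.
d = 5
A = rng.normal(size=(d, d))
cov = A @ A.T
samples = rng.multivariate_normal(np.zeros(d), cov, size=20000)

eta = 0.001                      # small constant learning rate (assumed value)
w = rng.normal(size=d)
w /= np.linalg.norm(w)           # start on the unit sphere

for x in samples:
    y = w @ x                    # neuron output
    w = w + eta * y * (x - y * w)  # Oja's stochastic update

# The norm of w should remain bounded (near 1), and w should align with
# the leading eigenvector of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(cov)
v1 = eigvecs[:, -1]
print("norm of w:", np.linalg.norm(w))
print("|cosine to top eigenvector|:", abs(w @ v1) / np.linalg.norm(w))
```

With a learning rate that is too large, the same iteration can leave any bounded region and diverge, which is the failure mode the brief's invariant-set conditions are designed to rule out.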
