Simple recurrent networks as generalized hidden Markov models with distributed representations

IEEE International Conference on Neural Networks

Abstract

Proposes simple recurrent neural networks as probabilistic models for representing and predicting time-sequences. The proposed model has the advantage of providing forecasts that consist of probability densities instead of single guesses of future values. It turns out that the model can be viewed as a generalized hidden Markov model with a distributed representation. The authors devise an efficient learning algorithm for estimating the parameters of the model using dynamic programming. The authors present some very preliminary simulation results to demonstrate the potential capabilities of the model. The present analysis provides a new probabilistic formulation of learning in simple recurrent networks.
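To make the abstract's central idea concrete, here is a minimal sketch of an Elman-style simple recurrent network whose output layer is a softmax over a discrete symbol alphabet, so each forecast is a probability distribution over the next symbol rather than a single point estimate. The alphabet size, hidden-layer width, weight initialization, and forward pass below are illustrative assumptions; the paper's exact parameterization, its hidden Markov model interpretation, and its dynamic-programming learning algorithm are not reproduced here.

```python
# Illustrative sketch only: a simple recurrent network that emits a
# probability density over the next symbol at every time step.
# Shapes, initialization, and training are assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

n_symbols, n_hidden = 4, 8                           # arbitrary alphabet size and hidden width
W_xh = rng.normal(0, 0.1, (n_hidden, n_symbols))     # input -> hidden weights
W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))      # hidden -> hidden (recurrent) weights
W_hy = rng.normal(0, 0.1, (n_symbols, n_hidden))     # hidden -> output weights

def softmax(z):
    z = z - z.max()                                  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict_sequence(symbols):
    """Return, for each position, a probability distribution over the next symbol."""
    h = np.zeros(n_hidden)                           # distributed hidden state
    forecasts = []
    for s in symbols:
        x = np.eye(n_symbols)[s]                     # one-hot encoding of the current symbol
        h = np.tanh(W_xh @ x + W_hh @ h)             # Elman-style state update
        forecasts.append(softmax(W_hy @ h))          # density over the next symbol
    return forecasts

for p in predict_sequence([0, 2, 1, 3]):
    print(np.round(p, 3))
```

Because the hidden state is a continuous vector rather than a single discrete state, the network can be read as a hidden Markov model with a distributed state representation, which is the view the abstract takes.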

