
Towards a General Theory of Neural Computation Based on Prediction by Single Neurons



Abstract

Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information (“prediction error” or “surprise”). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most “new” information about future reward. To minimize the error in its predictions and to respond only when excitation is “new and surprising,” the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of undifferentiated neurons, each implementing similar learning algorithms.
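To make the abstract's learning scheme concrete, below is a minimal, hypothetical Python sketch of a single model neuron that computes a prediction error from "current" and "prior" inputs and updates its weights with Hebbian and anti-Hebbian rules. The variable names (w_exc, w_prior, eta) and the exact update forms are illustrative assumptions, not equations taken from the paper.

```python
import numpy as np

def step(w_exc, w_prior, x_exc, x_prior, reward, eta=0.01):
    """One plasticity step for a single model neuron (illustrative sketch only)."""
    current = w_exc @ x_exc      # "current" information from excitatory synaptic inputs
    prior   = w_prior @ x_prior  # "prior" information from other inputs / channels
    error   = current - prior    # membrane voltage ~ prediction error ("surprise")

    # Hebbian rule (assumed form): strengthen excitatory inputs that are active
    # when the error is large and reward follows, i.e. inputs correlated with
    # reward but poorly predicted.
    w_exc = w_exc + eta * reward * error * x_exc

    # Anti-Hebbian rule (assumed form): adjust prior sources to cancel the error,
    # so the neuron responds only when excitation is "new and surprising."
    w_prior = w_prior + eta * error * x_prior

    return w_exc, w_prior, error

# Toy usage with random inputs and a random binary reward signal.
rng = np.random.default_rng(0)
w_exc = rng.uniform(0.1, 0.5, 8)
w_prior = rng.uniform(0.1, 0.5, 8)
for _ in range(100):
    x_exc, x_prior = rng.random(8), rng.random(8)
    reward = float(rng.random() < 0.5)
    w_exc, w_prior, err = step(w_exc, w_prior, x_exc, x_prior, reward)
```

The sketch only illustrates the qualitative logic described in the abstract: the prediction error drives both rules, with the anti-Hebbian update acting to minimize the error and the Hebbian update selecting reward-related, unpredictable excitatory inputs.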
