International Conference on Neuroinformatics

Competitive Maximization of Neuronal Activity in Convolutional Recurrent Spiking Neural Networks



Abstract

Spiking neural networks (SNNs) are a promising algorithm for real-time solutions on dedicated neurochip hardware. SNNs are believed to be highly energy- and computationally efficient. We focus on developing local learning rules capable of providing both supervised and unsupervised learning. We suppose that each neuron in a biological neural network tends to maximize its activity in competition with other neurons. This principle forms the basis of the SNN learning algorithm called FEELING. Here we introduce an efficient Convolutional Recurrent Spiking Neural Network architecture that uses the FEELING rules and, with 55 times fewer learnable weight parameters, provides better results than a fully connected SNN on the MNIST benchmark.
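The abstract only states the principle behind FEELING (each neuron adjusts its weights locally so as to maximize its own activity while competing with its neighbors); the exact rule is not given here. The following minimal Python sketch illustrates that general idea with an assumed winner-take-all competition and a Hebbian-style local update; all names, constants, and update details are illustrative assumptions, not the published FEELING algorithm.

```python
# Illustrative sketch of a competitive local learning rule for one spiking layer.
# Assumptions (not from the paper): rate/Poisson-coded inputs, hard winner-take-all
# competition, and a simple Hebbian-style weight update for the winning neuron.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 784, 100            # e.g. flattened MNIST input -> one competitive layer
W = rng.normal(0.0, 0.1, (n_out, n_in))
threshold = 1.0
lr = 1e-3

def step(x_spikes, W):
    """One discrete time step: integrate inputs, compete, fire, update locally."""
    potentials = W @ x_spikes                 # membrane potential of each neuron
    winner = np.argmax(potentials)            # lateral inhibition: strongest neuron wins
    out_spikes = np.zeros(n_out)
    if potentials[winner] >= threshold:
        out_spikes[winner] = 1.0
        # Local update: the winner moves its weights toward the active inputs,
        # increasing its own future activity; the losing neurons are weakly
        # depressed on those inputs, which keeps the competition alive.
        W[winner] += lr * (x_spikes - W[winner])
        losers = np.arange(n_out) != winner
        W[losers] -= 0.1 * lr * x_spikes
    return out_spikes, W

# Example: one Poisson-coded input frame
x = (rng.random(n_in) < 0.05).astype(float)
out, W = step(x, W)
```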
