IEEE Journal of Solid-State Circuits

A 4096-Neuron 1M-Synapse 3.8-pJ/SOP Spiking Neural Network With On-Chip STDP Learning and Sparse Weights in 10-nm FinFET CMOS



Abstract

A reconfigurable 4096-neuron, 1M-synapse chip in 10-nm FinFET CMOS is developed to accelerate inference and learning for many classes of spiking neural networks (SNNs). The SNN features digital circuits for leaky integrate-and-fire (LIF) neuron models, on-chip spike-timing-dependent plasticity (STDP) learning, and high-fan-out multicast spike communication. Structured fine-grained weight sparsity reduces synapse memory by up to 16× with less than 2% overhead for storing connections. Approximate computing co-optimizes the dropping flow control and benefits from algorithmic noise to process spatiotemporal spike patterns with up to 9.4× lower energy. The SNN achieves a peak throughput of 25.2 GSOP/s at 0.9 V, peak energy efficiency of 3.8 pJ/SOP at 525 mV, and 2.3-µW/neuron operation at 450 mV. On-chip unsupervised STDP trains a spiking restricted Boltzmann machine to de-noise Modified National Institute of Standards and Technology (MNIST) digits and to reconstruct natural scene images with an RMSE of 0.036. Near-threshold operation, in conjunction with temporal and spatial sparsity, reduces energy by 17.4× to 1.0 µJ/classification in a 236 × 20 feed-forward network that is trained to classify MNIST digits using supervised STDP. A binary-activation multilayer perceptron with 50% sparse weights is trained offline with error backpropagation to classify MNIST digits with 97.9% accuracy at 1.7 µJ/classification.
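
The LIF neuron model and STDP learning rule named in the abstract can be illustrated with a short software model. The sketch below is a minimal Python/NumPy version of the standard textbook formulations; the class LIFNeuron, the function stdp_update, and all constants are illustrative assumptions and do not describe the chip's fixed-point digital circuits or its on-chip learning engine.

```python
# Minimal sketch of the two algorithmic building blocks named in the abstract:
# a leaky integrate-and-fire (LIF) neuron and pair-based STDP weight updates.
# Parameter names and values are assumptions for illustration only.

import numpy as np

class LIFNeuron:
    """Discrete-time leaky integrate-and-fire neuron."""
    def __init__(self, leak=0.9, threshold=1.0):
        self.leak = leak            # membrane leak factor per time step (assumed)
        self.threshold = threshold  # firing threshold (assumed)
        self.v = 0.0                # membrane potential

    def step(self, synaptic_input):
        # Leak, integrate weighted input spikes, fire and reset on threshold.
        self.v = self.leak * self.v + synaptic_input
        if self.v >= self.threshold:
            self.v = 0.0
            return 1                # output spike
        return 0

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the pre-spike precedes the post-spike,
    depress otherwise. Exponential timing window; constants are assumptions."""
    dt = t_post - t_pre
    if dt >= 0:
        w += a_plus * np.exp(-dt / tau)   # pre before post -> potentiation
    else:
        w -= a_minus * np.exp(dt / tau)   # post before pre -> depression
    return float(np.clip(w, w_min, w_max))

# Example: drive one neuron with a random spike train through a single synapse
# and adapt the weight with STDP whenever the neuron fires.
rng = np.random.default_rng(0)
neuron, w, last_pre = LIFNeuron(), 0.5, None
for t in range(100):
    pre_spike = int(rng.random() < 0.3)
    if pre_spike:
        last_pre = t
    if neuron.step(w * pre_spike) and last_pre is not None:
        w = stdp_update(w, t_pre=last_pre, t_post=t)
print(f"final weight: {w:.3f}")
```

The example updates only one synapse on the most recent pre-spike; the paper's chip instead performs these operations in digital hardware across 1M synapses with multicast spike routing and structured sparse weight storage.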
