IEEE Journal of Solid-State Circuits

A 4096-Neuron 1M-Synapse 3.8-pJ/SOP Spiking Neural Network With On-Chip STDP Learning and Sparse Weights in 10-nm FinFET CMOS



Abstract

A reconfigurable 4096-neuron, 1M-synapse chip in 10-nm FinFET CMOS is developed to accelerate inference and learning for many classes of spiking neural networks (SNNs). The SNN features digital circuits for leaky integrate-and-fire neuron models, on-chip spike-timing-dependent plasticity (STDP) learning, and high-fan-out multicast spike communication. Structured fine-grained weight sparsity reduces synapse memory by up to 16x with less than 2% overhead for storing connections. Approximate computing, co-optimized with dropping flow control, benefits from algorithmic noise to process spatiotemporal spike patterns with up to 9.4x lower energy. The SNN achieves a peak throughput of 25.2 GSOP/s at 0.9 V, peak energy efficiency of 3.8 pJ/SOP at 525 mV, and 2.3-µW/neuron operation at 450 mV. On-chip unsupervised STDP trains a spiking restricted Boltzmann machine to de-noise Modified National Institute of Standards and Technology (MNIST) digits and to reconstruct natural scene images with an RMSE of 0.036. Near-threshold operation, in conjunction with temporal and spatial sparsity, reduces energy by 17.4x to 1.0 µJ/classification in a 236 x 20 feed-forward network that is trained to classify MNIST digits using supervised STDP. A binary-activation multilayer perceptron with 50% sparse weights is trained offline with error backpropagation to classify MNIST digits with 97.9% accuracy at 1.7 µJ/classification.
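The two core mechanisms the abstract names, leaky integrate-and-fire (LIF) neuron dynamics and STDP learning, can be sketched in a few lines. The following NumPy sketch is purely illustrative: the function names, leak factor, threshold, and learning rates are assumptions for exposition, not the chip's fixed-point digital implementation.

```python
import numpy as np

def lif_step(v, i_syn, leak=0.9, v_th=1.0):
    """One discrete-time LIF update: leak, integrate, fire, reset.
    Parameter values are illustrative, not taken from the chip."""
    v = leak * v + i_syn           # leaky integration of synaptic input
    spikes = v >= v_th             # fire when membrane crosses threshold
    v = np.where(spikes, 0.0, v)   # reset membrane of fired neurons
    return v, spikes

def stdp_update(w, pre_trace, post_trace, pre_spk, post_spk,
                a_plus=0.01, a_minus=0.012, decay=0.95):
    """Trace-based pair STDP on a (n_pre, n_post) weight matrix:
    potentiate when pre precedes post (LTP), depress when post
    precedes pre (LTD). Rates and decay are assumed values."""
    pre_trace = decay * pre_trace + pre_spk
    post_trace = decay * post_trace + post_spk
    w = w + a_plus * np.outer(pre_trace, post_spk)   # LTP term
    w = w - a_minus * np.outer(pre_spk, post_trace)  # LTD term
    return np.clip(w, 0.0, 1.0), pre_trace, post_trace
```

In a simulation loop, each timestep would call `lif_step` per layer and then `stdp_update` with the pre- and post-synaptic spike vectors; the chip performs the equivalent updates in digital logic at the synapse array.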


