Journal: IEEE Journal on Emerging and Selected Topics in Circuits and Systems

Enabling Non-Hebbian Learning in Recurrent Spiking Neural Processors With Hardware-Friendly On-Chip Intrinsic Plasticity



Abstract

Intrinsic plasticity (IP) is a non-Hebbian learning mechanism that self-adapts the intrinsic parameters of each neuron, as opposed to synaptic weights, offering complementary opportunities for improving learning performance. However, integrating IP on-chip to enable per-neuron self-adaptation can incur very large design overheads. This paper is the first work to explore efficient on-chip non-Hebbian IP learning for neural accelerators based on the recurrent spiking neural network model of the liquid state machine (LSM). The proposed LSM neural processor with integrated on-chip IP is improved in cost-effectiveness from both algorithmic and hardware-design points of view. We optimize a baseline IP rule, which delivers state-of-the-art learning performance, to enable feasible on-chip hardware integration, and further propose a new hardware-friendly IP rule, SpiKL-IFIP. With the proposed new IP rule and its optimized implementation, the hardware LSM neural accelerator with on-chip IP is dramatically improved in area/power overhead as well as training latency. On the Xilinx ZC706 FPGA board, the proposed co-optimization dramatically improves the cost-effectiveness of on-chip IP. Self-adapting reservoir neurons using IP boost the classification accuracy by up to 10.33% on the TI46 speech corpus and 8% on the TIMIT acoustic-phonetic dataset. Moreover, the proposed techniques reduce training energy by up to 49.6% and resource utilization by up to 64.9% while gracefully trading off classification accuracy for design efficiency.
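The abstract's key idea is that IP adapts a neuron's own parameters (rather than synaptic weights) to regulate its activity. The paper's SpiKL-IFIP rule is not specified in this abstract, so the following is only a minimal illustrative sketch of generic intrinsic plasticity on a leaky integrate-and-fire (LIF) neuron: the firing threshold is nudged so that the neuron's firing rate homeostatically approaches a target. All names and constants here are hypothetical, not the paper's.

```python
import numpy as np

class LIFNeuronWithIP:
    """LIF neuron whose firing threshold is self-adapted by a simple,
    hypothetical intrinsic-plasticity (IP) rule toward a target rate."""

    def __init__(self, tau=20.0, v_thresh=1.0, target_rate=0.05, eta=0.01):
        self.tau = tau                  # membrane time constant (ms)
        self.v = 0.0                    # membrane potential
        self.v_thresh = v_thresh        # intrinsic parameter adapted by IP
        self.target_rate = target_rate  # desired spike probability per step
        self.eta = eta                  # IP learning rate

    def step(self, input_current, dt=1.0):
        # Leaky integration of the input current
        self.v += dt * (-self.v / self.tau + input_current)
        spiked = self.v >= self.v_thresh
        if spiked:
            self.v = 0.0                # reset after a spike
        # IP update: raise the threshold when firing above the target
        # rate, lower it when firing below (non-Hebbian: no weights touched)
        self.v_thresh += self.eta * ((1.0 if spiked else 0.0) - self.target_rate)
        self.v_thresh = max(self.v_thresh, 0.1)   # keep threshold positive
        return spiked

# Drive the neuron with random input; the firing rate self-regulates.
rng = np.random.default_rng(0)
neuron = LIFNeuronWithIP()
spikes = [neuron.step(rng.uniform(0.0, 0.2)) for _ in range(5000)]
rate = sum(spikes) / len(spikes)
```

In a reservoir such as an LSM, every neuron would run such an update locally and in parallel, which is why the per-neuron hardware cost of the IP circuitry dominates the design overhead the paper optimizes.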

