Frontiers in Neuroscience

Boosting Throughput and Efficiency of Hardware Spiking Neural Accelerators Using Time Compression Supporting Multiple Spike Codes



Abstract

Spiking neural networks (SNNs) are the third generation of neural networks and can exploit both rate and temporal coding for energy-efficient event-driven computation. However, the decision accuracy of existing SNN designs is contingent upon processing a large number of spikes over a long period. Moreover, the switching power of SNN hardware accelerators is proportional to the number of spikes processed, while the length of spike trains limits throughput and static power efficiency. This paper presents the first study on developing temporal compression to significantly boost throughput and reduce energy dissipation of digital hardware SNN accelerators while remaining applicable to multiple spike codes. The proposed compression architectures consist of low-cost input spike compression units, novel input-and-output-weighted spiking neurons, and reconfigurable time constant scaling to support large and flexible time compression ratios. Our compression architectures can be transparently applied to any given pre-designed SNN employing either rate or temporal codes while incurring minimal modification of the neural models, learning algorithms, and hardware design. Using spiking speech and image recognition datasets, we demonstrate the feasibility of supporting large time compression ratios of up to 16x, delivering up to 15.93x, 13.88x, and 86.21x improvements in throughput, energy dissipation, and the tradeoff between hardware area, runtime, energy, and classification accuracy, respectively, based on different spike codes on a Xilinx Zynq-7000 FPGA. These results are achieved while incurring little extra hardware overhead.
