Home > Foreign Journals > Journal of VLSI signal processing systems for signal, image, and video technology > CyNAPSE: A Low-power Reconfigurable Neural Inference Accelerator for Spiking Neural Networks

CyNAPSE: A Low-power Reconfigurable Neural Inference Accelerator for Spiking Neural Networks


Abstract

While neural network models keep scaling in depth and computational requirements, biologically accurate models are becoming increasingly attractive for low-cost inference. Coupled with the need to bring more computation to the edge in resource-constrained embedded and IoT devices, specialized ultra-low-power accelerators for spiking neural networks are being developed. Because the models employed in these networks vary widely, such accelerators need to be flexible, user-configurable, performant, and energy-efficient. In this paper, we describe CyNAPSE, a fully digital accelerator designed to emulate the neural dynamics of diverse spiking networks. Since the primary concern of our use case is energy efficiency, we take a closer look at the factors that could improve the accelerator's energy consumption. We observe that while the majority of its dynamic power consumption can be attributed to memory traffic, its on-chip components suffer greatly from static leakage. Given that the event-driven spike-processing algorithm is naturally memory-intensive and leaves a large number of processing elements idle, it makes sense to tackle each of these problems on the way to a more efficient hardware implementation. Using a diverse set of network benchmarks, we conduct a detailed study of memory access patterns that ultimately informs our choice of an application-specific, network-adaptive memory management strategy to reduce the chip's dynamic power consumption. Subsequently, we also propose and evaluate a leakage mitigation strategy for runtime control of idle power. Using both the RTL implementation and a software simulation of CyNAPSE, we measure the relative benefits of these undertakings. Results show that our adaptive memory management policy yields up to 22% more reduction in dynamic power consumption than conventional policies, and that the runtime leakage mitigation techniques achieve up to 99.92% (and at least 14%) savings in leakage energy consumption across CyNAPSE hardware modules.
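The abstract notes that event-driven spike processing is naturally memory-intensive. A minimal sketch of why: every incoming spike forces a fetch of a full weight row from memory, so memory traffic scales with spike activity rather than with network size alone. This is an illustrative toy model only, not CyNAPSE's microarchitecture; the leaky integrate-and-fire dynamics, parameter values, and all names here are assumptions.

```python
# Toy event-driven SNN inference loop (illustrative; not CyNAPSE's design).
# Each spike event triggers one weight-row fetch -- the dominant memory traffic.
V_THRESH, V_RESET, LEAK = 1.0, 0.0, 0.99  # assumed LIF-style parameters

def run_event_driven(weights, input_spikes, n_neurons, steps):
    """weights: dense matrix, weights[src][dst]; input_spikes: {timestep: [src, ...]}.
    Returns final membrane potentials and the number of weight-row fetches."""
    v = [0.0] * n_neurons                      # membrane potentials
    events = {t: list(s) for t, s in input_spikes.items()}
    fetches = 0
    for t in range(steps):
        # Drain events for this timestep; each spike costs one row fetch.
        for src in events.pop(t, []):
            row = weights[src]
            fetches += 1
            for dst, w in enumerate(row):
                v[dst] += w
        # Leak, threshold, and reset; fired neurons schedule new events.
        for n in range(n_neurons):
            v[n] *= LEAK
            if v[n] >= V_THRESH:
                v[n] = V_RESET
                events.setdefault(t + 1, []).append(n)
    return v, fetches
```

Because only weight rows of spiking neurons are ever fetched, which rows are hot depends on the network's activity pattern; this is the kind of workload behavior a network-adaptive memory management policy can exploit.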
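The runtime leakage mitigation idea, in the abstract's terms, is that idle processing elements should not pay static-leakage cost. A common realization is to power-gate a module once it has been idle beyond some threshold; the sketch below is a hypothetical cycle-level model of that policy (the threshold, cost units, and zero-overhead wake-up are all assumptions, not CyNAPSE's actual scheme).

```python
# Hypothetical leakage model: gate a module after a long idle run.
IDLE_THRESHOLD = 4    # assumed: cycles of idleness tolerated before gating
LEAK_PER_CYCLE = 1.0  # arbitrary leakage-energy unit per powered cycle

def leakage_energy(activity, gated=True):
    """activity: list of booleans, True when the module does useful work.
    Returns leakage energy spent; gated-off cycles leak nothing."""
    energy, idle_run, powered = 0.0, 0, True
    for busy in activity:
        if busy:
            powered, idle_run = True, 0    # wake on demand (cost ignored here)
        else:
            idle_run += 1
            if gated and idle_run > IDLE_THRESHOLD:
                powered = False            # gate after a long idle run
        if powered:
            energy += LEAK_PER_CYCLE
    return energy
```

For a trace with one busy cycle followed by nine idle ones, the gated policy halves leakage energy in this model; real savings depend on idle-run statistics, which is why the achievable range reported for the hardware modules is so wide (14% to 99.92%).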
