ACM/IEEE Annual International Symposium on Computer Architecture

A Stochastic-Computing based Deep Learning Framework using Adiabatic Quantum-Flux-Parametron Superconducting Technology



Abstract

The Adiabatic Quantum-Flux-Parametron (AQFP) superconducting technology has recently been developed and achieves the highest energy efficiency among superconducting logic families, with a potential 10^4-10^5 energy-efficiency gain over state-of-the-art CMOS. In 2016, the successful fabrication and testing of AQFP-based circuits at the scale of 83,000 Josephson junctions (JJs) demonstrated the scalability and potential of implementing large-scale systems in AQFP. As a result, AQFP is promising for high-performance computing and deep-space applications, with Deep Neural Network (DNN) inference acceleration as an important example. Besides ultra-high energy efficiency, AQFP exhibits two unique characteristics. The first is its deeply pipelined nature: each AQFP logic gate is driven by an AC clock signal, which makes read-after-write (RAW) hazards harder to avoid. The second is the opportunity for true random number generation (RNG) using a single AQFP buffer, far more efficient than RNG in CMOS. We point out that these two characteristics make AQFP especially compatible with the stochastic computing (SC) technique, which represents a value as a time-independent bit sequence and is therefore compatible with deep pipelining. Moreover, prior work has investigated applying SC to DNNs and demonstrated its suitability, since DNN inference tolerates the approximate computation inherent in SC. This work is the first to develop an SC-based DNN acceleration framework using AQFP technology. The deep-pipelining nature of AQFP circuits makes accumulators/counters difficult to design, so prior SC-based DNN designs are not directly applicable. We overcome this limitation by exploiting the different properties of convolutional (CONV) and fully connected (FC) layers: (i) the inner-product computation in FC layers has many more inputs than that in CONV layers; (ii) an accurate activation function is critical in CONV layers but not in FC layers. Based on these observations, we propose (i) accurate integration of summation and the activation function in CONV layers using a bitonic sorting network and a feedback loop, and (ii) a low-complexity categorization block for FC layers based on a chain of majority gates. For a complete design, we also develop (i) an ultra-efficient stochastic number generator in AQFP, (ii) a high-accuracy sub-sampling (pooling) block in AQFP, and (iii) majority synthesis for further performance improvement, together with automatic buffer/splitter insertion required by AQFP circuits. Experimental results suggest that the proposed SC-based DNN using AQFP achieves up to 6.8 × 10^4 times higher energy efficiency than a CMOS-based implementation while maintaining 96% accuracy on the MNIST dataset.
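
The abstract above refers to stochastic computing's value representation and to stochastic number generation but, as an abstract, gives no implementation detail. For background, here is a minimal Python sketch of the standard SC primitives involved: a comparator-style stochastic number generator and AND-gate multiplication of unipolar bit streams. The function names and parameters are illustrative and are not taken from the paper.

    import random

    def sng(value, length, rng=random.random):
        """Stochastic number generator: encode value in [0, 1] as a bit
        stream whose probability of 1 equals value (unipolar SC format).
        In hardware this is a comparator fed by a random source; the
        abstract notes that a single AQFP buffer can provide true
        randomness far more cheaply than a CMOS RNG."""
        return [1 if rng() < value else 0 for _ in range(length)]

    def sc_multiply(stream_a, stream_b):
        """Unipolar SC multiplication: bitwise AND of two independent
        streams yields a stream whose 1-probability is the product."""
        return [a & b for a, b in zip(stream_a, stream_b)]

    def sc_decode(stream):
        """Recover the encoded value as the fraction of ones."""
        return sum(stream) / len(stream)

    # Example: 0.5 * 0.6 decodes to roughly 0.3, up to stochastic noise.
    x = sng(0.5, 4096)
    w = sng(0.6, 4096)
    print(sc_decode(sc_multiply(x, w)))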
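
The CONV-layer block described above integrates summation and the activation function using a bitonic sorting network with a feedback loop. The paper's circuit is not reproduced on this page; the sketch below only illustrates the underlying sorting idea, under the assumption that sorting each cycle's product bits yields a thermometer-coded (unary) sum that can then be thresholded as a step-like activation. For single-bit inputs every compare-exchange element reduces to an AND/OR gate pair.

    def compare_exchange(block, up):
        """One rank of compare-exchange elements; for 0/1 inputs each
        element is simply (min, max) = (AND, OR)."""
        half = len(block) // 2
        out = list(block)
        for i in range(half):
            lo, hi = min(out[i], out[i + half]), max(out[i], out[i + half])
            out[i], out[i + half] = (lo, hi) if up else (hi, lo)
        return out

    def bitonic_merge(block, up):
        if len(block) == 1:
            return block
        block = compare_exchange(block, up)
        half = len(block) // 2
        return bitonic_merge(block[:half], up) + bitonic_merge(block[half:], up)

    def bitonic_sort(block, up=False):
        """Sort a power-of-two-length block; up=False puts ones first,
        giving a thermometer-coded count of ones."""
        if len(block) <= 1:
            return block
        half = len(block) // 2
        left = bitonic_sort(block[:half], True)
        right = bitonic_sort(block[half:], False)
        return bitonic_merge(left + right, up)

    def thresholded_sum(product_bits, threshold):
        """Unary sum of this cycle's product bits, then a step-style
        activation: output 1 if at least `threshold` inputs are 1."""
        sorted_bits = bitonic_sort(product_bits, up=False)
        return sorted_bits[threshold - 1]

    # Example: 5 of the 8 product bits are 1, so a threshold of 4 fires.
    print(thresholded_sum([1, 0, 1, 1, 0, 1, 0, 1], threshold=4))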
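
The FC-layer categorization block is described only as a chain of majority gates, a primitive that is natural in AQFP (the abstract itself mentions majority synthesis). The exact design is not given here; the sketch below shows a 3-input majority gate and one hypothetical way a chain of such gates could fold many bits into an approximate majority decision. This is an assumption made for illustration, not the paper's circuit.

    def maj3(a, b, c):
        """3-input majority gate: 1 if at least two inputs are 1."""
        return (a & b) | (b & c) | (a & c)

    def majority_chain(bits):
        """Fold an odd-length bit sequence through a chain of maj3
        gates, two new bits per stage. The result is monotone in the
        inputs: all-ones gives 1, all-zeros gives 0, and mixed inputs
        approximate a majority vote (hypothetical use, not the paper's
        exact categorization block)."""
        assert len(bits) % 2 == 1, "odd length keeps every stage 3-input"
        state = bits[0]
        for i in range(1, len(bits), 2):
            state = maj3(state, bits[i], bits[i + 1])
        return state

    # Example: 5 ones out of 7 bits; this chain outputs 1 here.
    print(majority_chain([1, 1, 0, 1, 0, 1, 1]))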
