IEEE Transactions on Electron Devices

An Improved RRAM-Based Binarized Neural Network With High Variation-Tolerated Forward/Backward Propagation Module



Abstract

The binarized neural network (BNN) enables resistive switching random access memory (RRAM) devices with high nonlinearity and asymmetry to realize online training, using an RRAM comparator structure. In this work, a new hardware implementation approach is proposed to improve the efficiency of the BNN. In this approach, a 1T1R array-based propagation module is introduced and designed to accelerate the fully parallel vector-matrix multiplication (VMM) in both forward and backward propagation. With the 1T1R-based propagation module, high computing efficiency is achieved in both training and inference tasks, improving by 50× and 177×, respectively. To solve the computation error caused by device variation, a novel operation scheme with low gate voltage is proposed. With this scheme, the RRAM variation is dramatically suppressed: by 74.8% for cycle-to-cycle and 59.9% for device-to-device variation. This enables high-accuracy VMM calculation and therefore achieves 94.7% accuracy with a typical BNN, only 0.7% below the ideal variation-free case.
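As a rough illustration of the computation the abstract describes (not the authors' implementation), the following sketch models a binarized forward/backward VMM as an RRAM crossbar would perform it, with an added conductance-noise term standing in for device variation. All array sizes and the noise level are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(w):
    """Map real-valued weights to {+1, -1} (sign binarization),
    as assumed stored on the crossbar as conductance states."""
    return np.where(w >= 0, 1.0, -1.0)

# Hypothetical layer sizes for illustration.
x = rng.standard_normal(8)                  # input activation vector
W = binarize(rng.standard_normal((8, 4)))   # binarized weight matrix

# Forward propagation: one fully parallel VMM on the array.
y = x @ W

# Backward propagation: the transposed VMM, realized on a 1T1R
# array by driving the columns and sensing along the rows.
err = rng.standard_normal(4)
grad_x = err @ W.T

# Device variation can be modeled as additive conductance noise;
# suppressing this spread is what the low-gate-voltage scheme targets.
sigma = 0.05  # hypothetical relative variation
W_noisy = W + sigma * rng.standard_normal(W.shape)
vmm_error = np.abs(x @ W_noisy - y).max()

print(y.shape, grad_x.shape)  # (4,) (8,)
```

Because both propagation directions reduce to a VMM against the same stored matrix (or its transpose), a single crossbar can serve training and inference, which is the source of the reported speedups.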


