IEEE International System-on-Chip Conference

Efficient hardware architecture of softmax layer in deep neural network



Abstract

Deep neural networks (DNNs) have emerged as a very important machine learning and pattern recognition technique in the big data era. Targeting different types of training and inference tasks, the structure of a DNN varies with flexible choices of component layers, such as fully connected layers, convolutional layers, pooling layers, and softmax layers. Unlike other layers, which require only simple operations such as addition or multiplication, the softmax layer involves expensive exponentiation and division, so its hardware design suffers from high complexity, long critical path delay, and overflow problems. This paper, for the first time, presents an efficient hardware architecture for the softmax layer in DNNs. By utilizing a domain transformation technique and a down-scaling approach, the proposed hardware architecture avoids the aforementioned problems. Analysis shows that the proposed architecture achieves reduced hardware complexity and critical path delay.
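The two ideas named in the abstract can be sketched in software. The following is a minimal illustrative model, not the paper's fixed-point hardware design: down-scaling subtracts the input maximum so every exponent is non-positive (preventing overflow), and the domain transformation moves the normalization into the log domain, turning the costly division into a subtraction.

```python
import math

def softmax_stable(x):
    """Numerically stable softmax sketch mirroring the two ideas
    described in the abstract (illustrative, not the paper's circuit)."""
    # Down-scaling: subtract the maximum so exp() never overflows.
    m = max(x)
    # Domain transformation: compute log(sum(exp(.))) once, then
    # replace each division by the sum with a subtraction in the
    # log domain before exponentiating back.
    log_sum = math.log(sum(math.exp(v - m) for v in x))
    return [math.exp(v - m - log_sum) for v in x]
```

A naive softmax would overflow for inputs like `[1000.0, 1001.0]`, whereas this version remains well defined because every exponent argument is at most zero.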


