IEEE Conference on Computer Vision and Pattern Recognition Workshops

Exploring the Granularity of Sparsity in Convolutional Neural Networks


Abstract

Sparsity helps reduce the computational complexity of DNNs by skipping multiplications with zeros. The granularity of sparsity affects both the efficiency of the hardware architecture and the prediction accuracy. In this paper we quantitatively measure the accuracy-sparsity relationship at different granularities. Coarse-grained sparsity yields a more regular sparsity pattern, making hardware acceleration easier, and our experimental results show that coarse-grained sparsity has very little impact on the achievable sparsity ratio when no loss of accuracy is allowed. Moreover, thanks to the index-saving effect, coarse-grained sparsity achieves similar or even better compression rates than fine-grained sparsity at the same accuracy threshold. Our analysis, based on the framework of a recent sparse convolutional neural network (SCNN) accelerator, further demonstrates that coarse-grained sparsity saves 30%-35% of memory references compared with fine-grained sparsity.
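To make the index-saving effect concrete, here is a minimal NumPy sketch, not the paper's code: it compares element-wise (fine-grained) pruning with pruning at whole 3x3-kernel granularity on a toy conv layer at the same sparsity ratio. The kernel-level block size, 16-bit weights, and 4-bit relative indices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy conv layer: 64 filters, 64 input channels, 3x3 kernels.
W = rng.standard_normal((64, 64, 3, 3)).astype(np.float32)

def fine_grained_mask(w, sparsity):
    """Element-wise magnitude pruning: zero the smallest |w| individually."""
    thresh = np.quantile(np.abs(w).ravel(), sparsity)
    return np.abs(w) > thresh

def kernel_grained_mask(w, sparsity):
    """Coarse-grained pruning at 3x3-kernel granularity:
    rank whole kernels by L1 norm and drop the weakest ones."""
    norms = np.abs(w).sum(axis=(2, 3))             # one norm per kernel
    thresh = np.quantile(norms.ravel(), sparsity)
    keep = norms > thresh                           # one decision per kernel
    return np.broadcast_to(keep[:, :, None, None], w.shape)

sparsity = 0.75
for name, mask in [("fine", fine_grained_mask(W, sparsity)),
                   ("kernel", kernel_grained_mask(W, sparsity))]:
    nonzeros = int(mask.sum())
    # Index cost: fine-grained stores one index per surviving weight;
    # kernel-grained stores one index per surviving 3x3 kernel (9 weights).
    indices = nonzeros if name == "fine" else nonzeros // 9
    bits = nonzeros * 16 + indices * 4  # assumed 16-bit weights, 4-bit indices
    print(f"{name:>6}: nonzeros={nonzeros}, index entries={indices}, "
          f"total bits={bits}")
```

At the same sparsity ratio both variants keep roughly the same number of nonzero weights, but the kernel-grained one stores about one ninth as many index entries, which is the index-saving effect the abstract refers to.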
