Journal of Visual Communication & Image Representation

Reliable identification of redundant kernels for convolutional neural network compression


Abstract

To compress deep convolutional neural networks (CNNs) with a large memory footprint and long inference time, this paper proposes a novel pruning criterion based on layer-wise L-n-norms of feature maps to identify unimportant convolutional kernels. We calculate the L-n-norm of the feature map output by each convolutional kernel to evaluate the importance of that kernel. Furthermore, we use different L-n-norms for different layers, e.g., the L-1-norm for the first convolutional layer, the L-2-norm for middle convolutional layers, and the L-infinity-norm for the last convolutional layer. By accurately identifying unimportant convolutional kernels in each layer, the proposed method achieves a good balance between model size and inference accuracy. Experimental results on the CIFAR, SVHN, and ImageNet datasets, together with an application example in a railway intelligent surveillance system, show that the proposed method outperforms existing kernel-norm-based methods and is generally applicable to any deep neural network with convolutional operations. (C) 2019 Elsevier Inc. All rights reserved.
