Source: Chinese Journal of Electronics

CNQ: Compressor-Based Non-uniform Quantization of Deep Neural Networks

         

Abstract

Deep neural networks (DNNs) have achieved state-of-the-art performance in many domains but suffer from high computational complexity. Network quantization can effectively reduce computation and memory costs without changing the network structure, facilitating the deployment of DNNs on mobile devices. While existing methods can obtain good performance, low-bit quantization without time-consuming training or access to the full dataset remains a challenging problem. In this paper, we develop a novel method named Compressor-based non-uniform quantization (CNQ) to achieve non-uniform quantization of DNNs with few unlabeled samples. First, we present a compressor-based fast non-uniform quantization method, which accomplishes non-uniform quantization without iterations. Second, we propose to align the feature maps of the quantized model with those of the pre-trained model for accuracy recovery. Considering the differing properties of activation channels, we utilize per-channel weighted entropy to optimize the alignment loss. In the experiments, we evaluate the proposed method on image classification and object detection. Our results outperform existing post-training quantization methods, demonstrating the effectiveness of the proposed approach.
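The abstract does not specify which compressor function CNQ uses, but the general idea of compressor-based non-uniform quantization can be illustrated with a classical μ-law companding curve: weights are mapped through a monotone compressor that expands resolution near zero (where DNN weights concentrate), quantized on a uniform grid in the compressed domain, then mapped back. This is a minimal sketch assuming μ-law as the compressor, not the paper's actual method; it does, however, show the "no iterations" property, since the whole procedure is a closed-form transform:

```python
import numpy as np

def compressor_quantize(w, n_bits=4, mu=255.0):
    """Non-uniform quantization via mu-law companding (illustrative sketch).

    The compressor allocates finer quantization steps near zero, where
    DNN weight distributions are densest; a uniform grid is then applied
    in the compressed domain, in closed form (no iterative optimization).
    """
    # Normalize weights to [-1, 1]
    scale = np.max(np.abs(w)) + 1e-12
    x = w / scale
    # Compress: mu-law companding (odd, monotone function)
    comp = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    # Uniform quantization in the compressed domain
    levels = 2 ** (n_bits - 1) - 1
    q = np.round(comp * levels) / levels
    # Expand back (inverse compressor) and restore the original scale
    x_hat = np.sign(q) * np.expm1(np.abs(q) * np.log1p(mu)) / mu
    return x_hat * scale

# Example: 4-bit non-uniform quantization of Gaussian-like weights
np.random.seed(0)
w = np.random.randn(1000) * 0.1
w_q = compressor_quantize(w, n_bits=4)
```

With `n_bits=4` the quantized tensor takes at most 15 distinct values (7 per sign plus zero), yet small-magnitude weights are reconstructed far more precisely than with a uniform grid of the same bit width.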
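The second step, aligning the quantized model's feature maps with the pre-trained model's under a per-channel weighted-entropy objective, can also be sketched. The paper's exact weighting is not given in the abstract; the version below assumes, as an illustration, that each channel's importance weight is the histogram entropy of its full-precision activations, applied to a per-channel MSE alignment loss:

```python
import numpy as np

def weighted_alignment_loss(f_q, f_fp, bins=32, eps=1e-8):
    """Per-channel weighted alignment loss (illustrative sketch).

    f_q, f_fp: feature maps of shape (N, C, H, W) from the quantized
    and full-precision (pre-trained) models. Each channel's squared
    alignment error is weighted by the entropy of that channel's
    full-precision activation histogram; CNQ's actual weighted-entropy
    definition may differ from this assumed form.
    """
    N, C, H, W = f_fp.shape
    weights = np.empty(C)
    for c in range(C):
        hist, _ = np.histogram(f_fp[:, c], bins=bins)
        p = hist / max(hist.sum(), 1)
        # Shannon entropy of the channel's activation distribution
        weights[c] = -(p * np.log(p + eps)).sum()
    weights /= weights.sum() + eps
    # Per-channel mean squared error between the two models' features
    per_ch = ((f_q - f_fp) ** 2).mean(axis=(0, 2, 3))
    return float((weights * per_ch).sum())
```

Channels whose activations are spread over many values (high entropy) contribute more to the loss, so accuracy recovery focuses on the channels carrying the most information, which matches the abstract's motivation of accounting for property differences between activation channels.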
