Asymmetric Ternary Networks

Abstract

Deep Neural Networks (DNNs) are currently used in a wide variety of machine learning tasks, especially speech recognition and image classification. However, their huge demand for memory and computational power means that DNNs cannot be deployed efficiently on embedded devices. In this paper, we propose asymmetric ternary networks (ATNs), neural networks with weights constrained to the ternary values {-α_1, 0, +α_2}, which reduce DNN model size by about 16× compared with 32-bit full-precision models. The scaling factors {α_1, α_2} are used to reduce the quantization loss between the ternary weights and the full-precision weights. We compare ATNs with the recently proposed ternary weight networks (TWNs) and with full-precision networks on the CIFAR-10 and ImageNet datasets. The results show that on CIFAR-10 our ATN models outperform the full-precision VGG13 and VGG16 models by 0.11% and 0.33%, respectively. On ImageNet, our model outperforms the TWN AlexNet model by 2.25% in Top-1 accuracy and shows only 0.63% accuracy degradation compared with the full-precision counterpart.
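The idea the abstract describes, ternarizing each weight to {-α_1, 0, +α_2} with separate scaling factors for the negative and positive sides, can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the threshold rule `Δ = 0.7 · mean(|w|)` is borrowed from the TWN heuristic as an assumption, and each α is taken as the mean magnitude of the weights it replaces, which minimizes the L2 quantization error for that sign independently.

```python
import numpy as np

def ternarize_asymmetric(w, delta_ratio=0.7):
    """Hedged sketch of asymmetric ternary quantization.

    Assumes a TWN-style threshold delta = delta_ratio * mean(|w|).
    Weights above +delta map to +alpha_2, below -delta to -alpha_1,
    and the rest to 0. Each alpha is the mean magnitude of the weights
    it replaces (per-sign least-squares fit).
    """
    delta = delta_ratio * np.mean(np.abs(w))
    pos = w > delta
    neg = w < -delta
    alpha2 = w[pos].mean() if pos.any() else 0.0   # positive-side scale
    alpha1 = -w[neg].mean() if neg.any() else 0.0  # negative-side magnitude
    q = np.zeros_like(w)
    q[pos] = alpha2
    q[neg] = -alpha1
    return q, alpha1, alpha2
```

Storing only the 2-bit ternary code per weight plus two scalars per layer is what yields the roughly 16× compression over 32-bit weights cited in the abstract.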
