IEEE Transactions on Neural Networks and Learning Systems

Compressing Deep Neural Networks With Sparse Matrix Factorization



Abstract

Modern deep neural networks (DNNs) are usually overparameterized and composed of a large number of learnable parameters. One effective line of solutions attempts to compress DNN models by learning sparse weights and connections. In this article, we follow this line of research and present an alternative framework for learning sparse DNNs, with the assistance of matrix factorization. We provide an underlying principle for substituting the original parameter matrices with products of highly sparse ones, which constitutes the theoretical basis of our method. Experimental results demonstrate that our method substantially outperforms the previous state of the art in compressing various DNNs, giving rich empirical evidence in support of its effectiveness. It is also worth mentioning that, unlike many other works that focus only on feedforward networks such as multilayer perceptrons and convolutional neural networks, we also evaluate our method on a series of recurrent networks in practice.
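
The abstract describes the core idea of replacing dense parameter matrices with products of highly sparse matrices. The sketch below is a minimal, illustrative approximation of that idea (a truncated SVD followed by magnitude pruning of the two factors); it is not the authors' actual algorithm or training procedure, and the function names, rank, and keep_ratio values are assumptions chosen only for the example.

```python
# Illustrative sketch, NOT the paper's method: approximate a dense weight
# matrix W by a product of two factors, then sparsify each factor by
# magnitude pruning, and compare storage cost and reconstruction error.
import numpy as np

def factorize_and_sparsify(W, rank, keep_ratio):
    """Approximate W ~= A @ B with sparse factors A and B.

    rank       -- inner dimension of the factorization (assumed hyperparameter)
    keep_ratio -- fraction of entries kept (by magnitude) in each factor
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * np.sqrt(s[:rank])            # shape (m, rank)
    B = np.sqrt(s[:rank])[:, None] * Vt[:rank]     # shape (rank, n)

    def prune(M, keep):
        # Keep the `keep` fraction of largest-magnitude entries, zero the rest.
        k = max(1, int(keep * M.size))
        thresh = np.sort(np.abs(M), axis=None)[-k]
        return np.where(np.abs(M) >= thresh, M, 0.0)

    return prune(A, keep_ratio), prune(B, keep_ratio)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy "layer weights": approximately low-rank plus noise, so a modest
    # factorization rank captures most of the structure.
    W = rng.standard_normal((512, 32)) @ rng.standard_normal((32, 256)) \
        + 0.1 * rng.standard_normal((512, 256))

    A, B = factorize_and_sparsify(W, rank=64, keep_ratio=0.3)
    dense_params = W.size
    sparse_params = np.count_nonzero(A) + np.count_nonzero(B)
    rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
    print(f"dense params: {dense_params}, kept params: {sparse_params}")
    print(f"relative reconstruction error: {rel_err:.3f}")
```

In this toy setting the dense layer would also need fine-tuning after replacement; the point is only that storing two highly sparse factors can require far fewer nonzero parameters than the original dense matrix.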


