
SPARSITY CONSTRAINTS AND KNOWLEDGE DISTILLATION BASED LEARNING OF SPARSER AND COMPRESSED NEURAL NETWORKS


Abstract

The holy grail in deep neural network research is porting the memory- and computation-intensive network models on embedded platforms with a minimal compromise in model accuracy. Embodiments of the present disclosure build a Bayesian student network using the knowledge learnt by an accurate but complex pre-trained teacher network, and the sparsity induced by the variational parameters in the student network. Further, the sparsity-inducing capability of the teacher on the student network is learnt by employing a Block Sparse Regularizer on a concatenated tensor of teacher and student network weights. Specifically, the student network is trained using the variational lower bound based loss function, constrained on the hint from the teacher and the block-sparsity of weights.
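The abstract names the ingredients of the training objective but gives no formula, so the following is a minimal PyTorch sketch of one plausible form, not the patent's reference implementation. Assumed pieces: a FitNets-style hint term (MSE between teacher and student hidden activations), a KL term standing in for the regularizing part of the variational lower bound, an l2,1 group norm as the Block Sparse Regularizer over the concatenated teacher/student weight tensor, and placeholder coefficients alpha, beta, and lam.

import torch
import torch.nn.functional as F

def block_sparse_penalty(w_teacher: torch.Tensor,
                         w_student: torch.Tensor) -> torch.Tensor:
    # l2,1 (group-lasso) norm over rows of the concatenated tensor.
    # Assumes the paired teacher and student layers share an output
    # dimension; driving a row's norm to zero prunes that neuron in
    # both networks, transferring the teacher's sparsity pattern.
    w_cat = torch.cat([w_teacher, w_student], dim=1)  # (out, in_t + in_s)
    return w_cat.norm(p=2, dim=1).sum()

def student_loss(logits, targets, kl_term,
                 h_teacher, h_student,
                 w_teacher, w_student,
                 alpha=0.1, beta=1.0, lam=1e-4):
    # Negative variational lower bound: data-likelihood term plus KL term.
    nll = F.cross_entropy(logits, targets)
    # Hint from the teacher: match intermediate representations.
    hint = F.mse_loss(h_student, h_teacher)
    # Block-sparsity of the concatenated teacher/student weights.
    block = block_sparse_penalty(w_teacher, w_student)
    return nll + beta * kl_term + alpha * hint + lam * block

In a training loop, kl_term would come from the student's variational posterior (e.g., the KL divergence of a mean-field Gaussian from its prior), and only the student's parameters would receive gradients; the teacher is pre-trained and frozen.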
