SPARSITY CONSTRAINTS AND KNOWLEDGE DISTILLATION BASED LEARNING OF SPARSER AND COMPRESSED NEURAL NETWORKS
Abstract
A key challenge in deep neural network research is porting memory- and computation-intensive network models to embedded platforms with minimal compromise in model accuracy. Embodiments of the present disclosure build a Bayesian student network using the knowledge learnt by an accurate but complex pre-trained teacher network, together with the sparsity induced by the variational parameters in the student network. Further, the sparsity-inducing capability of the teacher on the student network is learnt by employing a Block Sparse Regularizer on a concatenated tensor of teacher and student network weights. Specifically, the student network is trained using a variational lower bound based loss function, constrained on the hint from the teacher and on the block-sparsity of weights.
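The Block Sparse Regularizer mentioned above can be understood as a group-sparsity (L2,1) penalty applied to the concatenation of corresponding teacher and student weight tensors, so that entire weight groups shared across both networks are driven to zero together. The following is a minimal NumPy sketch of that idea, not the patented implementation; the function name and the choice of rows as groups are assumptions for illustration.

```python
import numpy as np

def block_sparse_penalty(teacher_w, student_w):
    """Illustrative group-lasso (L2,1) penalty on concatenated teacher and
    student weight matrices. Each row is treated as one group spanning both
    networks: the L2 norm of each group is summed, so minimizing the penalty
    encourages whole groups (e.g., neurons) to be zeroed jointly in both
    networks. (Sketch only; grouping scheme is an assumption.)"""
    # Concatenate along the column axis so each group covers the
    # corresponding weights of both the teacher and the student.
    stacked = np.concatenate([teacher_w, student_w], axis=1)
    # L2 norm within each group (row), then an L1 sum over groups.
    return float(np.sum(np.sqrt(np.sum(stacked ** 2, axis=1))))

# Toy example: the second group is all-zero in both networks,
# so it contributes nothing to the penalty.
teacher = np.array([[3.0, 4.0], [0.0, 0.0]])
student = np.array([[0.0, 0.0], [0.0, 0.0]])
print(block_sparse_penalty(teacher, student))  # 5.0
```

In practice such a penalty would be added, with a weighting coefficient, to the variational lower bound based distillation loss during student training, so that gradient updates shrink grouped weights toward exact zeros.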