Published in: IEEE International Conference on Image Processing

Lightweight Neural Networks From PCA LDA Based Distilled Dense Neural Networks



Abstract

This paper presents two methods for building lightweight neural networks that achieve accuracy similar to that of heavyweight ones while demanding far less memory and fewer computing resources, so that they can be deployed on edge and IoT devices. The presented distillation methods are based on Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), respectively. Both rely on successive dimension reduction of the hidden features of a given dense neural network (the teacher), and on training a smaller neural network (the student) that solves the initial learning problem together with a mapping problem onto the successively reduced feature spaces. The presented methods are compared to baselines in which the student networks are learned from scratch, and we show that the additional mapping problem significantly improves the performance of the student networks in terms of accuracy, memory, and computing resources.
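The abstract does not include code, but the core idea — reducing the teacher's hidden features with PCA and training the student against both the task labels and the reduced features — can be sketched minimally in NumPy. The function names (`pca_reduce`, `distillation_loss`), the weight `lam`, and the toy dimensions are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def pca_reduce(features, k):
    """Project teacher hidden features onto their top-k principal components."""
    mu = features.mean(axis=0)
    centered = features - mu
    # SVD of the centered feature matrix: rows of vt are principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]                          # shape (k, d)
    return centered @ components.T, components, mu

def distillation_loss(student_logits, labels, student_feats,
                      teacher_feats_reduced, lam=1.0):
    """Task cross-entropy plus an MSE mapping loss to the reduced teacher features
    (lam balances the two terms; a hypothetical choice, not from the paper)."""
    # numerically stable softmax cross-entropy for the original task
    z = student_logits - student_logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    task = -log_probs[np.arange(len(labels)), labels].mean()
    # mapping loss: student features should reproduce the reduced teacher features
    mapping = ((student_feats - teacher_feats_reduced) ** 2).mean()
    return task + lam * mapping

# Toy example: 64-dim teacher hidden features reduced to 8 dims
rng = np.random.default_rng(0)
teacher_h = rng.normal(size=(128, 64))
reduced, comps, mu = pca_reduce(teacher_h, k=8)
print(reduced.shape)
```

An LDA variant would replace the SVD step with class-aware projection directions; the joint-loss structure stays the same.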
