Computer Vision and Image Understanding > Task dependent deep LDA pruning of neural networks

Task dependent deep LDA pruning of neural networks



Abstract

With deep learning's success, a limited number of popular deep nets have been widely adopted for various vision tasks. However, this usually results in unnecessarily high complexity and possibly many features of low task utility. In this paper, we address this problem by introducing a task-dependent deep pruning framework based on Fisher's Linear Discriminant Analysis (LDA). The approach can be applied to convolutional, fully-connected, and module-based deep network structures, in all cases leveraging the high decorrelation of neuron motifs found in the pre-decision space and cross-layer deconv dependency. Moreover, we examine our approach's potential in network architecture search for specific tasks and analyze the influence of our pruning on model robustness to noise and adversarial attacks. Experimental results on datasets of generic objects (ImageNet, CIFAR100) as well as domain-specific tasks (Adience and LFWA) illustrate our framework's superior performance over state-of-the-art pruning approaches and fixed compact nets (e.g. SqueezeNet, MobileNet). The proposed method successfully maintains comparable accuracy even after discarding most parameters (98%-99% for VGG16, up to 82% for the already compact InceptionNet) and with significant FLOP reductions (83% for VGG16, up to 64% for InceptionNet). Through pruning, we can also derive smaller, but more accurate and more robust, models suitable for the task.
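The core idea of discriminant-based pruning can be illustrated with a minimal sketch (not the authors' implementation, which operates in the pre-decision space with cross-layer dependencies): score each channel by a per-channel Fisher discriminant ratio (between-class variance over within-class variance of its pooled activations) and keep only the highest-scoring channels. The function names `fisher_channel_scores` and `prune_mask` are hypothetical.

```python
import numpy as np

def fisher_channel_scores(feats, labels):
    """Per-channel Fisher ratio: between-class variance over
    within-class variance of pooled channel activations.

    feats:  (N, C) array, one pooled activation per channel per sample.
    labels: (N,) integer class labels.
    """
    feats = np.asarray(feats, dtype=float)
    labels = np.asarray(labels)
    overall_mean = feats.mean(axis=0)                 # (C,)
    between = np.zeros(feats.shape[1])
    within = np.zeros(feats.shape[1])
    for c in np.unique(labels):
        cls = feats[labels == c]
        mu = cls.mean(axis=0)
        between += len(cls) * (mu - overall_mean) ** 2
        within += ((cls - mu) ** 2).sum(axis=0)
    return between / (within + 1e-12)                 # higher = more task utility

def prune_mask(scores, keep_ratio):
    """Boolean mask keeping the top `keep_ratio` fraction of channels."""
    k = max(1, int(round(keep_ratio * len(scores))))
    keep = np.argsort(scores)[::-1][:k]               # indices of top-k scores
    mask = np.zeros(len(scores), dtype=bool)
    mask[keep] = True
    return mask
```

In this toy formulation, channels whose activations barely separate the classes get a near-zero ratio and are pruned first, which mirrors the paper's notion of discarding features of low task utility.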
