Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data

Abstract

In this paper, we propose a new automatic hyperparameter selection approach that determines the optimal network configuration (network structure and hyperparameters) for deep neural networks (DNNs) using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations are encoded as a set of real-valued m-dimensional vectors that serve as the individuals of the PSO algorithm in the search procedure. During the search, the PSO algorithm looks for optimal network configurations by moving particles through a finite search space, and the steepest gradient descent algorithm trains the DNN classifier for a few epochs (to find a local optimum) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is run for more epochs on the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capability of the PSO algorithm are exploited to determine a solution that is close to the global optimum. We conducted several experiments on hand-written character and biological activity prediction datasets to show that the DNN classifiers trained with the network configurations given by the final solutions of the PSO algorithm, used to construct an ensemble model and individual classifiers, outperform a random selection approach in terms of generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks.
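
The search loop outlined above can be summarized in code. The sketch below is illustrative only: it assumes scikit-learn's MLPClassifier as a stand-in for the DNN classifier, a two-dimensional configuration vector (hidden-layer width and log learning rate) instead of the paper's full m-dimensional encoding, and the sklearn digits data in place of the hand-written-character datasets. All variable names and PSO settings are assumptions, not taken from the paper.

# A minimal, illustrative sketch of PSO-based network-configuration search
# (not the authors' implementation).
import warnings
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

warnings.filterwarnings("ignore")  # short training runs trigger convergence warnings

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Each particle is a real-valued vector: [hidden units, log10(learning rate)].
BOUNDS = np.array([[16.0, 256.0], [-4.0, -1.0]])

def evaluate(position, epochs=5):
    """Train the classifier for a few gradient-descent epochs; return validation error."""
    hidden = int(round(position[0]))
    lr = 10.0 ** position[1]
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), solver="sgd",
                        learning_rate_init=lr, max_iter=epochs, random_state=0)
    clf.fit(X_tr, y_tr)
    return 1.0 - clf.score(X_val, y_val)

n_particles, n_iters, dim = 8, 10, 2
pos = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_err = np.array([evaluate(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (assumed values)
for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, BOUNDS[:, 0], BOUNDS[:, 1])
    err = np.array([evaluate(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[pbest_err.argmin()].copy()

# After the search, the best configuration is retrained with many more epochs;
# the abstract additionally retrains every pbest to build an ensemble model.
print("best configuration:", gbest, "validation error:", evaluate(gbest, epochs=200))

In the full method, each pbest vector would likewise be retrained with more epochs so that the resulting DNN classifiers can be combined into the ensemble model described above.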

Bibliographic Information

  • Journal: PLoS Clinical Trials
  • Author(s): Fei Ye
  • Affiliation:
  • Year (Volume), Issue: 2011(12), 12
  • Year: 2011
  • Pages: e0188746
  • Total pages: 36
  • Format: PDF
  • Language:
  • CLC classification:
  • Keywords:
