Computer Methods in Applied Mechanics and Engineering

Nesterov-aided stochastic gradient methods using Laplace approximation for Bayesian design optimization



Abstract

Finding the best setup for experiments is the primary concern of Optimal Experimental Design (OED). Here, we focus on the Bayesian experimental design problem of finding the setup that maximizes the Shannon expected information gain. We use stochastic gradient descent and its accelerated counterpart, which employs Nesterov's method, to solve the optimization problem in OED. We adapt a restart technique, originally proposed for acceleration in deterministic optimization, to improve stochastic optimization methods. We combine these optimization methods with three estimators of the objective function: the double-loop Monte Carlo estimator (DLMC), the Monte Carlo estimator using the Laplace approximation for the posterior distribution (MCLA), and the double-loop Monte Carlo estimator with Laplace-based importance sampling (DLMCIS). Using stochastic gradient methods and Laplace-based estimators together allows us to use expensive and complex models, such as those that require solving partial differential equations (PDEs). From a theoretical viewpoint, we derive an explicit formula to compute the gradient estimator for the Monte Carlo methods, including MCLA and DLMCIS. From a computational standpoint, we study four examples: three based on analytical functions and one using the finite element method. The last example is an electrical impedance tomography experiment based on the complete electrode model. In these examples, the accelerated stochastic gradient descent method using MCLA converges to local maxima with up to five orders of magnitude fewer model evaluations than gradient descent with DLMC. (C) 2020 Elsevier B.V. All rights reserved.
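The abstract combines two ingredients that are easy to sketch in isolation: a double-loop Monte Carlo (DLMC) estimate of the Shannon expected information gain, and Nesterov-accelerated stochastic gradient ascent with a restart rule. The following Python sketch is illustrative only and is not the authors' implementation: the toy forward model, the noise level, the sample sizes, the step size, and the finite-difference gradient (the paper derives an explicit gradient formula instead) are all assumptions made for this example.

import numpy as np

rng = np.random.default_rng(0)
SIGMA = 0.5  # observation-noise standard deviation (assumed)


def forward(theta, xi):
    # Toy forward model, standing in for an expensive PDE solve.
    return theta * np.exp(-(xi - 1.0) ** 2)


def log_likelihood(y, theta, xi):
    # Gaussian log-likelihood of observation y given parameter theta and design xi.
    return -0.5 * ((y - forward(theta, xi)) / SIGMA) ** 2 - np.log(SIGMA * np.sqrt(2.0 * np.pi))


def eig_dlmc(xi, n_outer=64, n_inner=64):
    # Double-loop Monte Carlo (DLMC) estimate of the expected information gain at design xi.
    theta_out = rng.standard_normal(n_outer)                           # outer samples from the prior
    y = forward(theta_out, xi) + SIGMA * rng.standard_normal(n_outer)  # synthetic observations
    theta_in = rng.standard_normal(n_inner)                            # inner samples for the evidence
    log_lik = log_likelihood(y, theta_out, xi)
    log_evid = np.log(np.mean(np.exp(log_likelihood(y[:, None], theta_in[None, :], xi)), axis=1))
    return np.mean(log_lik - log_evid)


def eig_gradient(xi, h=1e-3):
    # Central finite-difference gradient of the DLMC estimate (a simplification;
    # the paper derives an explicit gradient formula).
    return (eig_dlmc(xi + h) - eig_dlmc(xi - h)) / (2.0 * h)


def nesterov_ascent(xi0, step=0.1, iters=200):
    # Nesterov-accelerated stochastic gradient ascent with a function-value restart.
    xi_prev, xi = xi0, xi0
    best, k = -np.inf, 0
    for _ in range(iters):
        momentum = k / (k + 3.0)
        look_ahead = xi + momentum * (xi - xi_prev)   # Nesterov extrapolation point
        xi_prev, xi = xi, look_ahead + step * eig_gradient(look_ahead)
        value = eig_dlmc(xi)
        if value < best:
            k = 0   # restart: discard momentum when the objective estimate drops
        else:
            best, k = value, k + 1
    return xi


print("optimized design:", nesterov_ascent(xi0=0.1))

In this toy setting the expected information gain peaks where the forward model is most sensitive to theta (around xi = 1), so the iteration should settle near that design. The Laplace-based estimators in the paper (MCLA, DLMCIS) replace or reweight the expensive inner evidence loop with a Laplace approximation of the posterior, which is what makes PDE-based models affordable.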
