Convergence Rates of Finite Difference Stochastic Approximation Algorithms Part II: Implementation via Common Random Numbers

In: Sensing and analysis technologies for biomedical and cognitive applications 2016


Abstract

Stochastic optimization is a fundamental problem that finds applications in many areas, including the biological and cognitive sciences. The classical stochastic approximation algorithm for iterative stochastic optimization requires gradient information about the sample objective function, which is typically difficult to obtain in practice. Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence of the Kiefer-Wolfowitz algorithm and the mirror descent algorithm when the gradient is approximated by finite differences generated through common random numbers. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences. In particular, it is shown that in Monte Carlo optimization the rate can be increased to n^(-2/5) in general, and to n^(-1/2), the best possible rate of stochastic approximation, for a broad class of problems, where n is the iteration number.
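For intuition, the following is a minimal Python sketch of a Kiefer-Wolfowitz iteration implemented with common random numbers: the two evaluations in each central difference reuse the same random seed. All names (kiefer_wolfowitz_crn, sample_obj) and the default gain exponents are illustrative assumptions, not taken from the paper, whose rate results depend on how the step-size and difference-width sequences are tuned.

```python
import numpy as np

def kiefer_wolfowitz_crn(sample_obj, x0, n_iters=5000,
                         a=1.0, c=1.0, alpha=1.0, gamma=0.25,
                         rng=None):
    """Kiefer-Wolfowitz iteration with common random numbers (CRN).

    sample_obj(x, rng) must return one noisy evaluation F(x, xi).
    Reusing the same seed for the +/- perturbations is the CRN step:
    it makes the simulation noise common to both evaluations so that
    much of it cancels in each central difference.
    NOTE: the gain exponents alpha and gamma are illustrative defaults,
    not the tuned choices analyzed in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    d = x.size
    for n in range(1, n_iters + 1):
        a_n = a / n ** alpha        # step-size sequence a_n
        c_n = c / n ** gamma        # difference-width sequence c_n
        g = np.empty(d)
        for i in range(d):
            e = np.zeros(d)
            e[i] = c_n
            seed = int(rng.integers(2 ** 32))   # one seed per +/- pair
            f_plus = sample_obj(x + e, np.random.default_rng(seed))
            f_minus = sample_obj(x - e, np.random.default_rng(seed))
            g[i] = (f_plus - f_minus) / (2.0 * c_n)
        x = x - a_n * g
    return x

# Toy usage: minimize E||x - (theta + Z)||^2 with Z ~ N(0, I);
# the minimizer is x* = theta.
theta = np.array([2.0, -1.0])

def sample_obj(x, r):
    return float(np.sum((x - (theta + r.standard_normal(x.size))) ** 2))

x_hat = kiefer_wolfowitz_crn(sample_obj, x0=np.zeros(2))
print(x_hat)  # should be close to [2, -1]
```

With independent draws at x + c_n e_i and x - c_n e_i, the variance of the difference quotient grows like 1/c_n^2 as c_n shrinks; under common random numbers the shared noise largely cancels, which is what permits the faster rates discussed in the abstract.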
