Conference on sensing and analysis technologies for biomedical and cognitive applications

Convergence Rates of Finite Difference Stochastic Approximation Algorithms Part II: Implementation via Common Random Numbers

Abstract

Stochastic optimization is a fundamental problem that finds applications in many areas, including the biological and cognitive sciences. The classical stochastic approximation algorithm for iterative stochastic optimization requires gradient information about the sample objective function, which is typically difficult to obtain in practice. Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence of the Kiefer-Wolfowitz algorithm and the mirror descent algorithm when the gradient is approximated by finite differences generated through common random numbers. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences. In particular, it is shown that the rate can be increased to n^(-2/5) in general, and to n^(-1/2), the best possible rate of stochastic approximation, in Monte Carlo optimization for a broad class of problems, where n is the iteration number.
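
To make the construction concrete, a minimal sketch of the estimator the abstract describes, in the standard two-sided Kiefer-Wolfowitz form (the gain sequences and the paper's exact scheme may differ), is

$$
x_{n+1} = x_n - a_n \hat{g}_n,
\qquad
\hat{g}_n = \frac{F(x_n + c_n,\; \xi_n) - F(x_n - c_n,\; \xi_n)}{2 c_n},
$$

where $F(x,\xi)$ is a noisy sample of the objective, $a_n$ and $c_n$ are the step-size and difference-width sequences, and the same random input $\xi_n$ (the common random number) is used at both perturbed points, so that much of the simulation noise cancels in the difference.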
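
Below is a self-contained Python sketch of this iteration on a toy one-dimensional problem. The function name kiefer_wolfowitz_crn, the quadratic toy objective, and the gain exponents (a_n = a0/n, c_n = c0 * n^(-1/4)) are illustrative choices, not taken from the paper; the point is only how reusing one seed per iteration implements common random numbers.

```python
import numpy as np

def kiefer_wolfowitz_crn(sample_obj, x0, n_iters=10_000,
                         a0=1.0, c0=1.0, seed=0):
    """One-dimensional Kiefer-Wolfowitz iteration with common random numbers.

    sample_obj(x, rng) must return a noisy evaluation of the objective at x,
    drawing all of its randomness from the supplied NumPy generator.
    Gain sequences are standard textbook choices, not the paper's tuned ones.
    """
    master = np.random.default_rng(seed)
    x = float(x0)
    for n in range(1, n_iters + 1):
        a_n = a0 / n                # step size a_n = a0 / n (illustrative)
        c_n = c0 / n ** 0.25        # difference width c_n = c0 * n^(-1/4)
        s = master.integers(2**63)  # draw one seed per iteration and reuse it
        # at both perturbed points -- this reuse is the common random numbers
        y_plus = sample_obj(x + c_n, np.random.default_rng(s))
        y_minus = sample_obj(x - c_n, np.random.default_rng(s))
        g_hat = (y_plus - y_minus) / (2.0 * c_n)  # finite-difference gradient
        x -= a_n * g_hat
    return x

# Toy check: minimize E[(x - Z)^2] with Z ~ N(1, 1); the minimizer is x* = 1.
if __name__ == "__main__":
    def noisy_quadratic(x, rng):
        return (x - rng.normal(1.0, 1.0)) ** 2

    print(kiefer_wolfowitz_crn(noisy_quadratic, x0=5.0))
```

With common random numbers, both evaluations in the difference see the same noise draw, so the noise largely cancels and the gradient estimate stays well behaved even as c_n shrinks; this is the mechanism behind the accelerated rates discussed in the abstract.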
