Journal: Statistics and Computing

Scalable estimation strategies based on stochastic approximations: classical results and new insights



Abstract

Estimation with large amounts of data can be facilitated by stochastic gradient methods, in which model parameters are updated sequentially using small batches of data at each step. Here, we review early work and modern results that illustrate the statistical properties of these methods, including convergence rates, stability, and asymptotic bias and variance. We then overview modern applications where these methods are useful, ranging from an online version of the EM algorithm to deep learning. In light of these results, we argue that stochastic gradient methods are poised to become benchmark principled estimation procedures for large datasets, especially those in the family of stable proximal methods, such as implicit stochastic gradient descent.
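The stability advantage of implicit stochastic gradient descent mentioned in the abstract can be illustrated on least squares, where the implicit update has a closed form. This is a minimal sketch, not code from the paper; the function name, learning-rate schedule, and simulated data are illustrative assumptions.

```python
import numpy as np

def sgd_least_squares(X, y, lr0=1.0, implicit=True):
    """One pass of (implicit) SGD minimizing the per-observation loss
    (y_n - x_n . theta)^2 / 2, with decaying learning rate a_n = lr0 / n.

    Explicit update:  theta_{n+1} = theta_n + a_n * (y_n - x_n . theta_n)   * x_n
    Implicit update:  theta_{n+1} = theta_n + a_n * (y_n - x_n . theta_{n+1}) * x_n,
    which for least squares solves in closed form: the residual step is shrunk
    by 1 / (1 + a_n * ||x_n||^2), so the iterate stays stable even for large a_n.
    """
    theta = np.zeros(X.shape[1])
    for n, (x, yn) in enumerate(zip(X, y), start=1):
        a = lr0 / n                       # Robbins-Monro style 1/n schedule
        resid = yn - x @ theta
        if implicit:
            theta = theta + (a / (1.0 + a * (x @ x))) * resid * x
        else:
            theta = theta + a * resid * x
    return theta

# Simulated data: well-specified linear model with small Gaussian noise.
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(5000, 3))
y = X @ theta_true + 0.1 * rng.normal(size=5000)
theta_hat = sgd_least_squares(X, y, implicit=True)
```

The shrinkage factor `1 / (1 + a_n * ||x_n||^2)` is what makes the implicit (proximal) variant robust to an overly large initial learning rate, whereas the explicit update can diverge when `a_n * ||x_n||^2 > 2`.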

