Frontiers in Applied Mathematics and Statistics

A Fixed-Point of View on Gradient Methods for Big Data



Abstract

Interpreting gradient methods as fixed-point iterations, we provide a detailed analysis of these methods for minimizing convex objective functions. Due to their conceptual and algorithmic simplicity, gradient methods are widely used in machine learning for massive datasets (big data). In particular, stochastic gradient methods are considered the de facto standard for training deep neural networks. Studying gradient methods within the realm of fixed-point theory provides us with powerful tools to analyze their convergence properties. In particular, gradient methods using inexact or noisy gradients, such as stochastic gradient descent, can be studied conveniently using well-known results on inexact fixed-point iterations. Moreover, as we demonstrate in this paper, the fixed-point approach allows an elegant derivation of accelerations for basic gradient methods. In particular, we show how gradient descent can be accelerated by a fixed-point preserving transformation of an operator associated with the objective function.
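
To make the fixed-point view concrete, the following is a minimal NumPy sketch (illustrative, not code from the paper): gradient descent on a convex quadratic f(x) = 0.5 x^T A x - b^T x is run as the fixed-point iteration x_{k+1} = T(x_k) with T(x) = x - alpha * grad f(x). The matrix A, the vector b, and the step-size choice are assumptions made for this example.

import numpy as np

# Convex quadratic f(x) = 0.5 * x^T A x - b^T x with gradient A x - b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite (assumed)
b = np.array([1.0, -1.0])

def grad_f(x):
    return A @ x - b

# Gradient descent as the fixed-point iteration x_{k+1} = T(x_k), where
# T(x) = x - alpha * grad_f(x). For 0 < alpha < 2/L, with L the largest
# eigenvalue of A, T is a contraction, so the iterates converge to the
# unique fixed point x* satisfying grad_f(x*) = 0, i.e., the minimizer of f.
L_const = np.linalg.eigvalsh(A).max()
alpha = 1.0 / L_const

x = np.zeros(2)
for k in range(200):
    x_next = x - alpha * grad_f(x)            # apply the operator T
    if np.linalg.norm(x_next - x) < 1e-10:    # (approximately) a fixed point
        x = x_next
        break
    x = x_next

print("fixed point of T:", x)
print("exact minimizer: ", np.linalg.solve(A, b))

Replacing grad_f above with a noisy or mini-batch estimate turns the loop into an inexact fixed-point iteration, which is the lens the abstract proposes for analyzing stochastic gradient descent.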
