IEEE Transactions on Neural Networks and Learning Systems

Stochastic Conjugate Gradient Algorithm With Variance Reduction



Abstract

Conjugate gradient (CG) methods are an important class of methods for solving linear equations and nonlinear optimization problems. In this paper, we propose a new stochastic CG algorithm with variance reduction, and we prove its linear convergence with the Fletcher–Reeves update for strongly convex and smooth functions. We experimentally demonstrate that the CG-with-variance-reduction algorithm converges faster than its counterparts on four learning models, which may be convex, nonconvex, or nonsmooth. In addition, its area under the curve (AUC) performance on six large-scale data sets is comparable to that of the LIBLINEAR solver for the L2-regularized L2-loss, but with a significant improvement in computational efficiency.
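For illustration, below is a minimal Python sketch of how a stochastic CG iteration with SVRG-style variance reduction and a Fletcher–Reeves coefficient might look. This is a sketch under assumptions, not the paper's exact algorithm: the function and argument names (cgvr_sketch, grad_i, full_grad), the fixed step size in place of a line search, and the inner-loop length are all illustrative choices.

import numpy as np

def cgvr_sketch(grad_i, full_grad, w0, n, m=50, epochs=10, lr=0.02, rng=None):
    """Sketch of stochastic CG with SVRG-style variance reduction.

    grad_i(w, i)  -- gradient of the i-th component function at w
    full_grad(w)  -- full-batch gradient at w
    n             -- number of component functions (samples)
    m             -- inner-loop length per snapshot
    """
    rng = np.random.default_rng() if rng is None else rng
    w = w0.copy()
    for _ in range(epochs):
        w_snap = w.copy()
        mu = full_grad(w_snap)          # full gradient at the snapshot point
        g_prev, d = None, None          # direction restarts at each snapshot
        for _ in range(m):
            i = rng.integers(n)
            # SVRG variance-reduced stochastic gradient estimate
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            if d is None:
                d = -g                  # first step: steepest descent
            else:
                # Fletcher-Reeves conjugacy coefficient
                beta = (g @ g) / (g_prev @ g_prev)
                d = -g + beta * d
            w = w + lr * d              # fixed step size; a simplification
            g_prev = g
    return w

# Toy usage on a least-squares problem (illustrative only):
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 10))
b = A @ rng.normal(size=10)
grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]       # per-sample gradient
full_grad = lambda w: A.T @ (A @ w - b) / len(b)     # full-batch gradient
w_est = cgvr_sketch(grad_i, full_grad, np.zeros(10), n=len(b))
print(np.linalg.norm(full_grad(w_est)))

The design idea is that the snapshot gradient mu shrinks the variance of the stochastic gradient as w approaches the optimum, so the CG directions stay informative; restarting the direction at each snapshot and tuning the step size are practical simplifications of the step-size conditions used in the paper's convergence analysis.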


