IEEE Transactions on Automatic Control

On the Convergence of a Distributed Augmented Lagrangian Method for Nonconvex Optimization



Abstract

In this paper, we propose a distributed algorithm for optimization problems that involve a separable, possibly nonconvex objective function subject to convex local constraints and linear coupling constraints. The method is based on the accelerated distributed augmented Lagrangians (ADAL) algorithm that was recently developed by the authors to address convex problems. Here, we extend this line of work in two ways. First, we establish convergence of the method to a local minimum of the problem, using assumptions that are common in the analysis of nonconvex optimization methods. To the best of our knowledge, this is the first work that shows convergence to local minima specifically for a distributed augmented Lagrangian (AL) method applied to nonconvex optimization problems; distributed AL methods are known to perform very well when used to solve convex problems. Second, we propose a more general and decentralized rule to select the stepsizes of the method. This improves on the authors' original ADAL method, where the stepsize selection used global information at initialization. Numerical results are included to verify the correctness and efficiency of the proposed distributed method.
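The abstract describes a distributed augmented Lagrangian scheme for separable objectives coupled by linear constraints. The following is a minimal, hypothetical sketch of an ADAL-style iteration (the problem data, variable names, and parameter values are illustrative, not taken from the paper): each agent minimizes its local augmented Lagrangian with the other agents' variables held fixed, a primal averaging step with stepsize tau <= 1/q is applied (q being the maximum number of agents coupled in any constraint row), and the dual variables ascend along the coupling residual. Quadratic, strongly convex local objectives are used here so each local subproblem has a closed-form solution; the paper itself treats the harder nonconvex case.

```python
import numpy as np

# Illustrative problem: minimize sum_i 0.5*x_i'Q_i x_i + c_i'x_i
# subject to the linear coupling constraint sum_i A_i x_i = b.
rng = np.random.default_rng(0)
N, n, m = 3, 2, 2                                # agents, local dim, coupling dim
Q = [2.0 * np.eye(n) for _ in range(N)]          # strongly convex local costs
c = [rng.standard_normal(n) for _ in range(N)]
A = [rng.standard_normal((m, n)) for _ in range(N)]
b = rng.standard_normal(m)

rho = 1.0        # augmented Lagrangian penalty parameter
tau = 1.0 / N    # stepsize <= 1/q; here every agent appears in every row, so q = N

x = [np.zeros(n) for _ in range(N)]
lam = np.zeros(m)

for _ in range(3000):
    coupling = sum(A[i] @ x[i] for i in range(N))
    x_hat = []
    for i in range(N):
        # contribution of the other agents, held fixed during agent i's solve
        r_others = coupling - A[i] @ x[i] - b
        # closed-form minimizer of the local augmented Lagrangian:
        # (Q_i + rho*A_i'A_i) x = -c_i - A_i'lam - rho*A_i'r_others
        H = Q[i] + rho * A[i].T @ A[i]
        g = -c[i] - A[i].T @ lam - rho * A[i].T @ r_others
        x_hat.append(np.linalg.solve(H, g))
    # primal averaging step, then dual ascent along the coupling residual
    x = [x[i] + tau * (x_hat[i] - x[i]) for i in range(N)]
    residual = sum(A[i] @ x[i] for i in range(N)) - b
    lam = lam + rho * tau * residual

print(np.linalg.norm(residual))   # coupling constraint violation
```

In this sketch every agent touches every coupling row, which forces the conservative stepsize tau = 1/N; the decentralized stepsize rule discussed in the abstract would instead let each agent use local coupling information, which is what removes the need for global information at initialization.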
