IEEE Transactions on Automatic Control

Convergence Rate of Distributed ADMM Over Networks



Abstract

We propose a new distributed algorithm based on the alternating direction method of multipliers (ADMM) to minimize the sum of locally known convex functions using communication over a network. This optimization problem arises in many applications in distributed machine learning and statistical estimation. Our algorithm allows for a general choice of the communication weight matrix, which is used to combine the iterates at different nodes. We show that when the functions are convex, both the objective function values and the feasibility violation converge with rate O(1/T), where T is the number of iterations. We then show that when the functions are strongly convex and have Lipschitz continuous gradients, the sequence generated by our algorithm converges linearly to the optimal solution. In particular, an ε-optimal solution can be computed with O(√κ log(1/ε)) iterations, where κ is the condition number of the problem. Our analysis highlights the effect of the network and the communication weights on the convergence rate through the degrees of the nodes, the smallest nonzero eigenvalue, and the operator norm of the communication matrix.
