
Asynchronous Gradient Push


Abstract

We consider a multiagent framework for distributed optimization where each agent has access to a local smooth strongly convex function, and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents’ local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents. When the local functions are strongly convex with Lipschitz-continuous gradients, we show that the iterates at each agent converge to a neighborhood of the global minimum, where the neighborhood size depends on the degree of asynchrony in the multiagent network. When the agents work at the same rate, convergence to the global minimizer is achieved. Numerical experiments demonstrate that asynchronous gradient push can minimize the global objective faster than the state-of-the-art synchronous first-order methods, is more robust to failing or stalling agents, and scales better with the network size.
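The algorithm described here is gradient push, a push-sum based scheme in which each agent interleaves local gradient steps with asymmetric mixing of a numerator variable and a weight variable over a directed network. The sketch below is a rough illustration of that idea, not the paper's implementation: the scalar quadratic local objectives, the directed-ring topology, the constant step size, and the per-agent activation probabilities (used to mimic agents working at different rates) are all assumptions chosen for brevity. With a constant step and unequal activation rates, the de-biased estimates settle near, but not exactly at, the global minimizer, consistent with the neighborhood-convergence statement in the abstract.

```python
# Minimal sketch of push-sum based gradient push on a directed ring (illustrative only).
# Assumption: each agent i holds a local quadratic f_i(z) = 0.5 * a_i * (z - b_i)^2,
# which is smooth and strongly convex as the abstract requires.
import numpy as np

rng = np.random.default_rng(0)

n_agents = 5
a = rng.uniform(1.0, 2.0, size=n_agents)        # local curvatures
b = rng.uniform(-1.0, 1.0, size=n_agents)       # local minimizers
z_star = np.sum(a * b) / np.sum(a)              # minimizer of the sum of the f_i

def grad(i, z):
    """Gradient of the assumed local function f_i at z."""
    return a[i] * (z - b[i])

# Push-sum state: numerator x_i, weight w_i, de-biased estimate z_i = x_i / w_i.
x = np.zeros(n_agents)
w = np.ones(n_agents)
step = 0.05                                      # constant step size (assumption)
p_active = rng.uniform(0.6, 1.0, size=n_agents)  # per-agent update rates, mimicking asynchrony

for k in range(2000):
    active = rng.random(n_agents) < p_active     # which agents wake up on this tick
    x_new = np.zeros(n_agents)
    w_new = np.zeros(n_agents)
    for i in range(n_agents):
        if not active[i]:
            # A sleeping agent keeps its mass and pushes nothing out.
            x_new[i] += x[i]
            w_new[i] += w[i]
            continue
        z_i = x[i] / w[i]
        xi = x[i] - step * grad(i, z_i)          # local gradient step on the numerator
        # Push half of the mass to the out-neighbor on the ring, keep the other half
        # (the resulting mixing matrix is column stochastic, as push-sum needs).
        j = (i + 1) % n_agents
        x_new[i] += 0.5 * xi
        w_new[i] += 0.5 * w[i]
        x_new[j] += 0.5 * xi
        w_new[j] += 0.5 * w[i]
    x, w = x_new, w_new

print("per-agent estimates:", x / w)
print("global minimizer:   ", z_star)
```

Setting all activation probabilities to 1 recovers the synchronous case, in which the estimates agree and approach the global minimizer up to the error induced by the constant step size.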