Journal: IEEE Transactions on Parallel and Distributed Systems

Communication-Efficient Federated Learning With Compensated Overlap-FedAvg

Abstract

While petabytes of data are generated each day by a large number of independent computing devices, only a small fraction of that data can ultimately be collected and used for deep learning (DL) because of concerns over data security and privacy leakage, which seriously hinders the advancement of DL. In this circumstance, federated learning (FL) was proposed to train models on the combined data of multiple clients without sharing datasets within the cluster. Nevertheless, federated learning with periodic model averaging (FedAvg) introduces massive communication overhead, since the data synchronized in each round is about the same size as the model itself, leading to low communication efficiency. Consequently, various proposals focusing on reducing communication rounds and compressing the transferred data have been put forward to decrease the communication overhead of FL. In this article, we propose Overlap-FedAvg, an innovative framework that loosens the chain-like constraint of federated learning and parallelizes the model training phase with the model communication phase (i.e., uploading local models and downloading the global model), so that the latter can be completely hidden by the former. Compared with vanilla FedAvg, Overlap-FedAvg is further equipped with a hierarchical computing strategy, a data compensation mechanism, and a Nesterov accelerated gradients (NAG) algorithm. In particular, Overlap-FedAvg is orthogonal to many other compression methods, so they can be applied together to maximize the utilization of the cluster. In addition, a theoretical analysis is provided to prove the convergence of the proposed framework. Extensive experiments on both image classification and natural language processing tasks with multiple models and datasets also demonstrate that the proposed framework substantially reduces the communication overhead and accelerates the federated learning process.
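
The core idea described above, hiding the cost of uploading and downloading models behind the next round of local computation, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration of such overlapping; all names (Worker, train_local_epoch, exchange_with_server) are assumptions for this sketch and are not taken from the paper's implementation, and it omits the hierarchical computing strategy, data compensation, and NAG components mentioned in the abstract.

# Minimal sketch (assumed names, not the paper's code): overlap the exchange of
# the previous model snapshot with the next round of local training, so the
# communication latency is hidden behind computation.
import copy
import threading

import torch
import torch.nn.functional as F


class Worker:
    def __init__(self, model, optimizer, dataloader):
        self.model = model
        self.optimizer = optimizer
        self.dataloader = dataloader
        self.received_global = None  # global weights returned by the server

    def train_local_epoch(self):
        # One plain local training pass over the client's data.
        self.model.train()
        for x, y in self.dataloader:
            self.optimizer.zero_grad()
            loss = F.cross_entropy(self.model(x), y)
            loss.backward()
            self.optimizer.step()

    def exchange_with_server(self, snapshot):
        # Placeholder for the upload/download round trip; a real system would
        # send `snapshot` to the parameter server and receive the averaged
        # global model. Here it is simulated as an identity reply.
        self.received_global = snapshot

    def round(self):
        # Launch communication of the *stale* snapshot in the background ...
        snapshot = copy.deepcopy(self.model.state_dict())
        comm = threading.Thread(target=self.exchange_with_server, args=(snapshot,))
        comm.start()

        # ... and keep computing locally while the transfer is in flight.
        self.train_local_epoch()

        comm.join()
        # The returned global model lags the local one by the work done during
        # the overlap; Overlap-FedAvg's data compensation mechanism (omitted
        # here) corrects for that gap before the weights are adopted.
        self.model.load_state_dict(self.received_global)

In use, one Worker would be constructed per client and round() called once per communication round; the only point of the sketch is that train_local_epoch and exchange_with_server run concurrently rather than back to back.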
