IEEE Transactions on Neural Networks and Learning Systems

Robust and Communication-Efficient Federated Learning From Non-i.i.d. Data



Abstract

Federated learning allows multiple parties to jointly train a deep learning model on their combined data, without any of the participants having to reveal their local data to a centralized server. This form of privacy-preserving collaborative learning, however, comes at the cost of a significant communication overhead during training. To address this problem, several compression methods have been proposed in the distributed training literature that can reduce the amount of required communication by up to three orders of magnitude. These existing methods, however, are only of limited utility in the federated learning setting, as they either only compress the upstream communication from the clients to the server (leaving the downstream communication uncompressed) or only perform well under idealized conditions, such as i.i.d. distribution of the client data, which typically cannot be found in federated learning. In this article, we propose sparse ternary compression (STC), a new compression framework that is specifically designed to meet the requirements of the federated learning environment. STC extends the existing compression technique of top-k gradient sparsification with a novel mechanism to enable downstream compression as well as ternarization and optimal Golomb encoding of the weight updates. Our experiments on four different learning tasks demonstrate that STC distinctively outperforms federated averaging in common federated learning scenarios. These results advocate for a paradigm shift in federated optimization toward high-frequency low-bitwidth communication, in particular in bandwidth-constrained learning environments.
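
The two core steps named in the abstract, top-k sparsification and ternarization of a weight update, can be illustrated with a minimal Python sketch. The function names stc_compress/stc_decompress and the sparsity parameter below are illustrative assumptions, not the authors' reference implementation, and the Golomb encoding of the sparse positions is omitted for brevity.

    import numpy as np

    def stc_compress(delta_w, sparsity=0.01):
        """Top-k sparsification + ternarization of a weight-update tensor.

        Returns the flat indices of the k largest-magnitude entries, their
        signs, and the mean magnitude mu; every transmitted value is thus
        one of {+mu, -mu}, and all other entries are implicitly zero.
        """
        flat = delta_w.ravel()
        k = max(1, int(sparsity * flat.size))
        idx = np.argpartition(np.abs(flat), -k)[-k:]  # positions of the top-k entries
        mu = np.abs(flat[idx]).mean()                 # single shared magnitude
        signs = np.sign(flat[idx])                    # +1 / -1 per kept entry
        # In the full scheme, idx would additionally be Golomb-encoded
        # before transmission.
        return idx, signs, mu

    def stc_decompress(idx, signs, mu, shape):
        """Rebuild the dense update from the sparse ternary message."""
        flat = np.zeros(int(np.prod(shape)))
        flat[idx] = signs * mu
        return flat.reshape(shape)

    # Round trip on a toy update tensor at 10% sparsity.
    update = np.random.randn(4, 8)
    idx, signs, mu = stc_compress(update, sparsity=0.1)
    restored = stc_decompress(idx, signs, mu, update.shape)

Because the same compression is applied to the server-to-client broadcast as well, both directions of communication shrink, which is the downstream-compression property the abstract highlights over upstream-only methods.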
