IEEE International Conference on Distributed Computing Systems

SNAP: A Communication Efficient Distributed Machine Learning Framework for Edge Computing



Abstract

More and more applications learn from data collected by edge devices. Conventional learning methods, such as gathering all the raw data to train a single model in a centralized way, or training a target model in a distributed manner under the parameter server framework, suffer from high communication costs. In this paper, we design Select Neighbors and Parameters (SNAP), a communication-efficient distributed machine learning framework that mitigates this cost. A distinct feature of SNAP is that the edge servers act as peers to each other. Specifically, in SNAP, every edge server hosts a copy of the global model, trains it on its local data, and periodically updates its local parameters based on the weighted sum of the parameters from its neighbors (i.e., peers) only, without pulling parameters from all other edge servers. Unlike most previous work on consensus optimization, in which the weight matrix used to update parameter values is predefined, we propose a scheme that optimizes the weight matrix based on the network topology, thereby improving the convergence rate. Another key idea in SNAP is that only the parameters that have changed significantly since the last iteration are sent to the neighbors. Both theoretical analysis and simulations show that SNAP achieves the same accuracy as centralized training. Compared to the state-of-the-art communication-aware distributed learning scheme TernGrad, SNAP incurs a 99.6% lower communication cost.
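The abstract's two core mechanisms can be illustrated with a minimal sketch. The function and variable names below (`neighbor_average`, `select_significant`, the `threshold` parameter) are illustrative assumptions, not the paper's actual API: the first function performs the weighted consensus update over a server's neighbors, and the second selects only the parameters that have drifted beyond a threshold since the last transmission, approximating SNAP's parameter-selection idea.

```python
import numpy as np

def neighbor_average(local_params, neighbor_params, weights):
    """Consensus step: weight own parameters, then add each neighbor's
    weighted contribution. `weights` maps "self" and neighbor names to
    scalars (in SNAP these would come from the topology-optimized
    weight matrix)."""
    updated = weights["self"] * local_params
    for name, params in neighbor_params.items():
        updated += weights[name] * params
    return updated

def select_significant(params, last_sent, threshold):
    """Return the indices and values of parameters whose change since the
    last transmission exceeds `threshold`; the rest are withheld to save
    communication."""
    mask = np.abs(params - last_sent) > threshold
    return np.flatnonzero(mask), params[mask]
```

In a full decentralized training loop, each server would interleave local gradient steps with `neighbor_average` exchanges, sending only the sparse output of `select_significant` over the network.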

