An Effective Primal-Dual Stochastic Distributed Strategy for Large-Scale Machine Learning
Abstract

In this patent, we propose a novel primal-dual stochastic distributed algorithm for convex optimization problems with private set constraints in a multi-agent networked system, where all nodes aim to collectively minimize the sum of the local objective functions. Motivated by a variety of machine learning applications in which large-scale training sets are distributed across multiple autonomous nodes, each local objective function is further designed as the average of a moderate number of local instantaneous functions. The algorithm comprises four stages: setting related parameters and initial values of variables; computing stochastic gradients; exchanging information; and updating variables. The algorithm set forth in the present invention updates each node's state by resorting to unbiased stochastic averaging gradients and projection techniques. Specifically, at each iteration every node evaluates the gradient of one randomly selected local instantaneous function and uses the average of the most recent stochastic gradients to approximate the true local gradient. Therefore, the algorithm can significantly reduce the cost of evaluating gradients of the local objective functions and improve computational efficiency in modern large-scale, and especially high-dimensional, information processing problems.

[Figure 5: convergence versus simulation time (seconds), comparing the proposed algorithm against EXTRA, DSA, DGD with a constant step size, and DGD with a diminishing step size. Figure 6: (a) g1, (b) g2.]
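The four-stage update described above (initialize; compute an unbiased stochastic averaging gradient; exchange information with neighbours; project and update) can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented method itself: the doubly-stochastic consensus weight matrix `W`, the box-shaped private set, and all function names are assumptions introduced for the example, and the dual-variable handling of the full primal-dual scheme is omitted.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^d (an illustrative
    private set; the patent allows general convex private sets)."""
    return np.clip(x, lo, hi)

def stochastic_averaging_gradient(table, grads_sum, x, i, grad_fi):
    """SAGA-style unbiased stochastic averaging gradient for one node.

    table[i] stores the last gradient evaluated for instantaneous
    function i; grads_sum is the running sum of all stored gradients.
    """
    g_new = grad_fi(x, i)                 # gradient of one randomly chosen f_i
    m = table.shape[0]
    # unbiased estimate: fresh gradient minus its stored copy,
    # plus the average of the whole gradient table
    g_est = g_new - table[i] + grads_sum / m
    grads_sum += g_new - table[i]         # keep the running sum consistent
    table[i] = g_new
    return g_est

def run(W, grad_fi, m, d, lo=-1.0, hi=1.0, step=0.05, iters=200, seed=0):
    """Primal update sketch: consensus mixing + stochastic averaging
    gradient + projection. Dual/constraint handling is omitted."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = np.zeros((n, d))                  # one state vector per node
    tables = np.zeros((n, m, d))          # per-node gradient tables
    sums = np.zeros((n, d))               # per-node running sums
    for _ in range(iters):
        x_mix = W @ x                     # exchange information with neighbours
        for k in range(n):
            i = rng.integers(m)           # pick one instantaneous function
            g = stochastic_averaging_gradient(
                tables[k], sums[k], x[k], i,
                lambda z, j, k=k: grad_fi(k, z, j))
            x[k] = project_box(x_mix[k] - step * g, lo, hi)
    return x
```

On a toy problem where each instantaneous function is a quadratic centred at some point, all nodes should agree (approximately) on the minimizer of the global sum, i.e. the mean of all centres, while each gradient evaluation touches only one instantaneous function per node per iteration.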