IEEE Transactions on Signal Processing

Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part I: Theory

Abstract

In this two-part paper, we propose a general algorithmic framework for the minimization of a nonconvex smooth function subject to nonconvex smooth constraints, and also consider extensions to some structured, nonsmooth problems. The algorithm solves a sequence of (separable) strongly convex problems and maintains feasibility at each iteration. Convergence to a stationary solution of the original nonconvex optimization is established. Our framework is very general and flexible and unifies several existing successive convex approximation (SCA)-based algorithms. More importantly, and differently from current SCA approaches, it naturally leads to distributed and parallelizable implementations for a large class of nonconvex problems. This Part I is devoted to the description of the framework in its generality. In Part II, we customize our general methods to several (multiagent) optimization problems in communications, networking, and machine learning; the result is a new class of centralized and distributed algorithms that compare favorably to existing ad-hoc (centralized) schemes.
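The core loop the abstract describes (at each iterate, minimize a strongly convex surrogate of the objective, then step toward the surrogate's minimizer) can be sketched in Python for a toy unconstrained scalar instance. The quadratic surrogate, the objective f(x) = x⁴ − 3x² + x, and all parameter values below are illustrative assumptions; the paper's actual framework handles nonconvex constraints, separable multivariate problems, and parallel updates.

```python
# Minimal successive convex approximation (SCA) sketch for a toy
# unconstrained, smooth, nonconvex scalar problem. Hypothetical
# instance, not the paper's general constrained framework.

def sca_minimize(grad, x0, tau=10.0, gamma=0.5, iters=200):
    """At iterate x_k, minimize the strongly convex surrogate
        grad(x_k) * (x - x_k) + (tau / 2) * (x - x_k)**2,
    whose closed-form minimizer is x_hat = x_k - grad(x_k) / tau,
    then move a fraction gamma of the way toward x_hat."""
    x = x0
    for _ in range(iters):
        x_hat = x - grad(x) / tau      # minimizer of the surrogate
        x = x + gamma * (x_hat - x)    # convex-combination step
    return x

# Toy nonconvex objective and its gradient (illustrative choice).
f = lambda x: x**4 - 3 * x**2 + x
g = lambda x: 4 * x**3 - 6 * x + 1

x_star = sca_minimize(g, x0=2.0)       # converges to a stationary point
```

Here the surrogate linearizes f and adds a proximal term (tau/2)(x − x_k)², so it is strongly convex even though f is not; each subproblem has a closed-form solution, which is what makes the parallel, per-block variants in the paper attractive.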
