
Distributed optimization in multi-agent systems: Applications to distributed regression.



Abstract

The context for this work is cooperative multi-agent systems (MAS). An agent is an intelligent entity that can measure some aspect of its environment, process information, and possibly influence the environment through its actions. A cooperative MAS can be defined as a loosely coupled network of agents that interact and cooperate to solve problems beyond the individual capabilities or knowledge of any single agent.

The focus of this thesis is distributed stochastic optimization in multi-agent systems. In distributed optimization, the complete optimization problem is not available at a single location but is distributed among different agents. The distributed optimization problem is additionally stochastic when the information available to each agent is subject to stochastic errors. Communication constraints, the lack of global information about the network topology, and the absence of coordinating agents make it infeasible to collect all the information at a single location and then treat it as a centralized optimization problem. Thus, the problem has to be solved using algorithms that are distributed, i.e., different parts of the algorithm are executed at different agents, and local, i.e., each agent uses only information available locally and information it can obtain from its immediate neighbors.

In this thesis, we primarily focus on the specific problem of minimizing a sum of component functions over a constraint set, when each component function is known partially (with stochastic errors) to a unique agent. The constraint set is known to all the agents. We propose three distributed and local algorithms, establish their asymptotic convergence with diminishing stepsizes, and obtain rate-of-convergence results. Stochastic errors, as we will see, arise naturally when the objective function known to an agent involves a random variable with unknown statistics. Additionally, stochastic errors also model communication and quantization errors. The problem is motivated by distributed regression in sensor networks and power control in cellular systems.

We also discuss an important extension of the above problem. In the extension, the network goal is to minimize a global function of a sum of component functions over a constraint set. Each component function is known to a unique network agent. The global function and the constraint set are known to all the agents. Unlike the previous problem, this problem is not stochastic; however, its objective function is more general. We propose an algorithm to solve this problem and establish its convergence.
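In standard notation (the symbols below are illustrative and not necessarily those used in the thesis), the core problem described in the abstract can be written as

\min_{x \in X} \; f(x) = \sum_{i=1}^{m} f_i(x),

where X is the constraint set known to all m agents and each component f_i is available only to agent i through evaluations corrupted by stochastic errors. The extension replaces the objective with g\!\left(\sum_{i=1}^{m} f_i(x)\right) for a global function g known to every agent.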
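As a concrete illustration of the distributed, local computation described above, the following is a minimal sketch of a consensus-based projected stochastic gradient iteration applied to distributed least-squares regression. It is an illustrative instance of this class of methods, not a reproduction of the three algorithms proposed in the thesis; the box constraint, mixing matrix W, noise model, and step-size schedule are assumptions made for the example.

# Minimal sketch (illustrative, not the thesis's exact algorithms):
# each agent i holds private data (A_i, b_i) and the local objective
#   f_i(x) = 0.5/N_i * ||A_i x - b_i||^2,
# and the network cooperatively minimizes sum_i f_i(x) over a box X.
import numpy as np

def project_box(x, lo=-10.0, hi=10.0):
    # Euclidean projection onto the assumed box constraint set X
    return np.clip(x, lo, hi)

def distributed_sgd(A_list, b_list, W, num_iters=5000, noise_std=0.1, seed=0):
    rng = np.random.default_rng(seed)
    m = len(A_list)                  # number of agents
    n = A_list[0].shape[1]           # dimension of the decision variable
    X = np.zeros((m, n))             # row i = agent i's current iterate
    for k in range(num_iters):
        alpha = 1.0 / (k + 1)        # diminishing stepsize, as in the abstract
        V = W @ X                    # local averaging with neighbors' iterates
        for i in range(m):
            # noisy local gradient, modeling the stochastic errors
            grad = A_list[i].T @ (A_list[i] @ V[i] - b_list[i]) / len(b_list[i])
            grad += noise_std * rng.standard_normal(n)
            X[i] = project_box(V[i] - alpha * grad)
    return X

if __name__ == "__main__":
    # Example: 4 agents on a ring, each with private regression data.
    rng = np.random.default_rng(1)
    x_true = np.array([1.0, -2.0, 0.5])
    A_list = [rng.standard_normal((20, 3)) for _ in range(4)]
    b_list = [A @ x_true + 0.05 * rng.standard_normal(20) for A in A_list]
    # doubly stochastic mixing matrix for a 4-agent ring (assumed topology)
    W = np.array([[0.5, 0.25, 0.0, 0.25],
                  [0.25, 0.5, 0.25, 0.0],
                  [0.0, 0.25, 0.5, 0.25],
                  [0.25, 0.0, 0.25, 0.5]])
    X = distributed_sgd(A_list, b_list, W)
    print("agent estimates:\n", X)   # rows should be close to x_true

Each agent only averages the iterates of its immediate neighbors (via W) and takes a projected step along its own noisy gradient, which mirrors the "distributed and local" requirement stated in the abstract.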

Bibliographic details

  • Author: Srinivasan, Sundhar Ram
  • Author affiliation: University of Illinois at Urbana-Champaign
  • Degree-granting institution: University of Illinois at Urbana-Champaign
  • Subject: Engineering, Electronics and Electrical
  • Degree: Ph.D.
  • Year: 2010
  • Pages: 141 p.
  • Total pages: 141
  • Format: PDF
  • Language: English (eng)
