IEEE International Conference on Systems, Man, and Cybernetics

Temporal and agent abstractions in multiagent reinforcement learning



Abstract

A major challenge in multiagent reinforcement learning is the problem of scale: increasing the number of agents in a system dramatically increases both the cost of representing the problem and the cost of computing a solution. In single-agent systems, temporal abstractions in the form of options have been used to address part of the scaling problem, but for multiagent systems only limited work exists, largely confined to cooperative games. This paper presents a formalization of options for multiagent systems and introduces a framework for agent abstraction that treats coalitions executing options analogously to agents executing policies, resulting in a lower-dimensional game whose equilibria approximately correspond to equilibria of the higher-dimensional game.
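As a point of reference, a minimal sketch of the objects involved, stated in the standard option notation of Sutton, Precup, and Singh and extended here to a Markov game; the paper's own formalization may differ in detail.

\[
\text{Markov game: } \langle N, S, \{A_i\}_{i \in N}, T, \{r_i\}_{i \in N} \rangle,
\qquad T : S \times A_1 \times \cdots \times A_n \to \Delta(S)
\]
\[
\text{Option: } o = \langle I, \pi, \beta \rangle,
\qquad I \subseteq S, \quad \pi : S \to \Delta(A), \quad \beta : S \to [0,1]
\]
\[
\text{Coalition option: for } C \subseteq N, \quad o_C = \langle I_C, \pi_C, \beta_C \rangle
\text{ with } \pi_C : S \to \Delta\!\Big(\textstyle\prod_{i \in C} A_i\Big)
\]

Under the agent abstraction described above, each coalition executing such an option is treated as a single player, yielding a lower-dimensional game over coalition options rather than over primitive joint actions.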
