2011 IEEE International Conference on Development and Learning

Scalable reinforcement learning through hierarchical decompositions for weakly-coupled problems

Abstract

Reinforcement Learning, or Reward-Dependent Learning, has been very successful at describing how animals and humans adjust their actions so as to increase their gains and reduce their losses across a wide variety of tasks. Empirical studies have furthermore identified numerous neuronal correlates of the quantities such computations require. In general, however, it is too expensive for the brain to encode actions and their outcomes with respect to all available dimensions describing the state of the world. This suggests the existence of learning algorithms that exploit the independencies present in the world and thereby reduce the computational costs of representation and learning. A possible solution is to use separate learners for task dimensions with independent dynamics and rewards, but the condition of full independence is usually too restrictive. Here, we propose a hierarchical reinforcement learning solution for the more general case in which the dynamics are not independent but weakly coupled, and we show how to assign credit to the different modules, which solve the task jointly.
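
To make the idea concrete, below is a minimal sketch (in Python) of one way separate tabular learners can jointly solve a weakly-coupled task. Everything in it is an illustrative assumption rather than the paper's method: the hypothetical ModuleLearner observes a single state dimension, the joint action is chosen greedily over the sum of the module values, and the global reward is split equally among the modules. That naive equal split is only a placeholder for the principled credit assignment the paper develops.

# A minimal sketch of modular Q-learning on a weakly-coupled task, for
# illustration only; the module structure, the equal-split credit rule,
# and the toy environment below are assumptions, not the paper's algorithm.
import random
from collections import defaultdict

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # joint moves over two dimensions

class ModuleLearner:
    """Tabular Q-learner that observes only one dimension of the state."""
    def __init__(self, alpha=0.1, gamma=0.95):
        self.q = defaultdict(float)  # (dim_state, joint_action) -> value
        self.alpha, self.gamma = alpha, gamma

    def value(self, s, a):
        return self.q[(s, a)]

    def update(self, s, a, credit, s_next):
        # Learn only from this module's share of the global reward,
        # bootstrapping on the module's own best next value.
        best_next = max(self.q[(s_next, b)] for b in ACTIONS)
        td = credit + self.gamma * best_next - self.q[(s, a)]
        self.q[(s, a)] += self.alpha * td

def act(modules, state, epsilon=0.1):
    # The joint action is greedy over the SUM of module values,
    # so the modules select actions jointly.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: sum(m.value(s, a) for m, s in zip(modules, state)))

def step(state, a):
    # Two 1-D chains; each dimension walks toward position 4. A small
    # penalty for letting the dimensions drift apart makes them weakly
    # coupled rather than independent.
    x = min(max(state[0] + a[0], 0), 4)
    y = min(max(state[1] + a[1], 0), 4)
    reward = (1.0 if (x, y) == (4, 4) else 0.0) - 0.05 * abs(x - y)
    return (x, y), reward, (x, y) == (4, 4)

modules = [ModuleLearner(), ModuleLearner()]
for episode in range(500):
    state, done = (0, 0), False
    for t in range(200):  # cap episode length
        a = act(modules, state)
        nxt, r, done = step(state, a)
        for m, s, s2 in zip(modules, state, nxt):
            m.update(s, a, r / len(modules), s2)  # naive equal-split credit
        state = nxt
        if done:
            break

Summing the module values is what lets the modules solve the task jointly: no single learner sees the full state, yet the greedy action reflects all of them. The small coupling penalty in the toy environment is exactly the kind of weak interaction for which an equal split of reward is only an approximation, and for which the paper's credit-assignment scheme is intended.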
