Journal of Sensors

Reinforcement Learning Guided by Double Replay Memory



Abstract

Experience replay memory in reinforcement learning enables agents to remember and reuse past experiences. Most reinforcement learning models rely on a single experience replay memory to train agents. In this article, we propose a framework that accommodates a doubly used experience replay memory, exploiting both important transitions and new transitions simultaneously. In numerical studies, deep Q-networks (DQN) equipped with the double experience replay memory are examined under various scenarios. A self-driving car requires an automated agent to figure out, in real time, when to change lanes appropriately. To this end, we apply our proposed agent to Simulation of Urban MObility (SUMO) experiments. Besides, we also verify its applicability to reinforcement learning whose action space is discrete (e.g., computer game environments). Taken all together, we conclude that the proposed framework outperforms previously known reinforcement learning models by virtue of the double experience replay memory.
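A minimal sketch of the double-replay idea described in the abstract (the class, method, and parameter names below are illustrative assumptions, not the authors' implementation): one FIFO buffer retains the newest transitions, a second buffer retains transitions ranked as important (here, by absolute TD-error), and each training minibatch mixes samples from both.

```python
import random
from collections import deque


class DoubleReplayMemory:
    """Illustrative double replay memory: a FIFO buffer of new
    transitions plus a buffer of important (high TD-error) ones."""

    def __init__(self, capacity=10000):
        self.recent = deque(maxlen=capacity)  # newest transitions (FIFO)
        self.important = []                   # (priority, transition) pairs
        self.capacity = capacity

    def push(self, transition, td_error):
        """Store a transition in both memories; importance = |TD-error|."""
        self.recent.append(transition)
        self.important.append((abs(td_error), transition))
        # Keep only the highest-priority transitions within capacity.
        self.important.sort(key=lambda pair: pair[0], reverse=True)
        del self.important[self.capacity:]

    def sample(self, batch_size, mix=0.5):
        """Draw a minibatch mixing important and recent transitions."""
        n_imp = min(int(batch_size * mix), len(self.important))
        n_rec = min(batch_size - n_imp, len(self.recent))
        batch = [t for _, t in random.sample(self.important, n_imp)]
        batch += random.sample(list(self.recent), n_rec)
        return batch


# Usage sketch: push dummy transitions, then sample a mixed minibatch.
memory = DoubleReplayMemory(capacity=100)
for i in range(50):
    memory.push(("state_%d" % i, "action", 0.0, "next_state"), td_error=i * 0.1)
minibatch = memory.sample(batch_size=8)
```

The `mix` ratio controls how strongly the agent favors important transitions over fresh ones; the paper's actual sampling and priority scheme may differ.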
