Venue: IEEE Annual Conference on Decision and Control

On Improving the Robustness of Reinforcement Learning-based Controllers using Disturbance Observer

机译:用干扰观察者提高加固基于学习控制器的鲁棒性

Abstract

Because reinforcement learning (RL) can cause stability and safety issues when applied directly to physical systems, a simulator is often used to learn a control policy. However, control performance can deteriorate significantly on the real plant due to the discrepancy between the simulator and the plant. In this paper, we propose an idea to enhance the robustness of such RL-based controllers by utilizing the disturbance observer (DOB). This method compensates for the mismatch between the plant and the simulator, and rejects disturbances to maintain the nominal performance while guaranteeing robust stability. Furthermore, the proposed approach can be applied to partially observable systems. We also characterize conditions under which the learned controller has a provable performance bound when connected to the physical system.
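To illustrate the general idea (this is a minimal sketch of a generic discrete-time disturbance observer around a fixed policy, not the paper's algorithm; the plant parameters, filter gain, and the deadbeat "learned" policy below are all illustrative assumptions), consider a scalar plant whose true dynamics differ from the nominal model the policy was tuned on, with an unknown constant input disturbance. The DOB attributes the one-step prediction error to a lumped input disturbance, low-pass filters the estimate, and subtracts it from the policy's command:

```python
# Nominal model: x[k+1] = a_n*x[k] + b_n*u[k]  (what the policy assumes)
a_n, b_n = 0.8, 1.0
# True plant: perturbed dynamics plus an unknown constant input disturbance d.
a_p, b_p, d = 0.9, 1.1, 0.5

def policy(x):
    # Hypothetical "learned" policy: deadbeat regulation to the origin
    # under the NOMINAL model (stands in for an RL policy here).
    return -a_n * x / b_n

def simulate(steps, use_dob):
    x, d_hat = 2.0, 0.0
    for _ in range(steps):
        # DOB compensation: cancel the current disturbance estimate.
        u = policy(x) - (d_hat if use_dob else 0.0)
        x_next = a_p * x + b_p * (u + d)   # true plant response
        x_pred = a_n * x + b_n * u         # nominal one-step prediction
        # Lump the model mismatch into an equivalent input disturbance,
        # then low-pass filter the estimate (Q-filter analogue).
        residual = (x_next - x_pred) / b_n
        d_hat = 0.7 * d_hat + 0.3 * residual
        x = x_next
    return abs(x)

print("without DOB:", simulate(50, use_dob=False))  # settles near 0.56
print("with DOB:   ", simulate(50, use_dob=True))   # driven close to 0
```

Without compensation the mismatch leaves a persistent offset; with the observer in the loop, the estimate converges to the lumped disturbance and the nominal regulation behavior is recovered, which is the effect the abstract describes.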
