Journal: IEEE Transactions on Power Systems

Reinforcement learning for reactive power control



Abstract

This paper presents a Reinforcement Learning (RL) method for the network-constrained setting of control variables. The RL method formulates the constrained load-flow problem as a multistage decision problem. More specifically, the model-free learning algorithm (Q-learning) learns by experience how to adjust a closed-loop control rule that maps states (load-flow solutions) to control actions (offline control settings) by means of reward values. Rewards are chosen to express how well control actions satisfy the operating constraints. The Q-learning algorithm is applied to the IEEE 14-busbar and IEEE 136-busbar systems for constrained reactive power control. The results are compared with those given by the probabilistic constrained load flow based on sensitivity analysis, demonstrating the advantages and flexibility of the Q-learning algorithm. Computing times are also compared with those of another heuristic method.
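The abstract describes tabular Q-learning where states (load-flow solutions) are mapped to control actions and the reward reflects constraint satisfaction. The following minimal sketch illustrates that general scheme on a toy problem; the state/action sizes, the transition function, and the reward rule are hypothetical stand-ins, not the paper's power-system model.

```python
import random

# Toy dimensions: states stand in for discretized load-flow solutions,
# actions for discrete control settings (e.g., tap or capacitor steps).
N_STATES, N_ACTIONS = 5, 3
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def reward(state, action):
    # Hypothetical reward: +1 if the action "satisfies the constraints"
    # (a toy rule here), -1 otherwise.
    return 1.0 if (state + action) % N_ACTIONS == 0 else -1.0

def step(state, action):
    # Toy deterministic transition standing in for re-running the load flow.
    return (state + action) % N_STATES

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
random.seed(0)
state = 0
for _ in range(2000):
    # Epsilon-greedy action selection.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    r = reward(state, action)
    nxt = step(state, action)
    # Standard Q-learning update.
    Q[state][action] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[state][action])
    state = nxt

# Greedy policy extracted from the learned Q-table.
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)
```

Because Q-learning is model-free, the same loop applies whenever a load-flow run can be queried for the resulting state and a constraint-based reward; only `reward` and `step` would change.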
