IEEE Transactions on Smart Grid

Deep Reinforcement Learning for Strategic Bidding in Electricity Markets



Abstract

Bi-level optimization and reinforcement learning (RL) constitute the state-of-the-art frameworks for modeling strategic bidding decisions in deregulated electricity markets. However, the former neglects the market participants' physical non-convex operating characteristics, while conventional RL methods require discretization of state and/or action spaces and thus suffer from the curse of dimensionality. This paper proposes a novel deep reinforcement learning (DRL) based methodology, combining a deep deterministic policy gradient (DDPG) method with a prioritized experience replay (PER) strategy. This approach sets up the problem in multi-dimensional continuous state and action spaces, enabling market participants to receive accurate feedback regarding the impact of their bidding decisions on the market clearing outcome, and devise more profitable bidding decisions by exploiting the entire action domain, also accounting for the effect of non-convex operating characteristics. Case studies demonstrate that the proposed methodology achieves a significantly higher profit than the alternative state-of-the-art methods, and exhibits a more favourable computational performance than benchmark RL methods due to the employment of the PER strategy.
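The paper itself does not include code; the following is a minimal, illustrative sketch of the proportional prioritized experience replay (PER) component named in the abstract, written in Python/NumPy. All names and hyperparameters (PrioritizedReplayBuffer, alpha, beta, eps, capacity) are assumptions for illustration, not taken from the paper; in the proposed method such a buffer would supply minibatches to a DDPG actor-critic operating on continuous market states and bidding actions.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Illustrative proportional prioritized experience replay.

    Transitions are sampled with probability proportional to p_i^alpha,
    where p_i is the (absolute) TD error last recorded for transition i;
    the induced sampling bias is corrected with importance-sampling
    weights w_i = (N * P(i))^(-beta), normalised by the largest weight.
    """

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities shape sampling
        self.beta = beta            # strength of importance-sampling correction
        self.eps = eps              # keeps every stored priority strictly positive
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, state, action, reward, next_state, done):
        # New transitions get the current maximum priority so that each
        # is replayed at least once before its priority is refined.
        max_prio = self.priorities.max() if self.buffer else 1.0
        transition = (state, action, reward, next_state, done)
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        n = len(self.buffer)
        probs = self.priorities[:n] ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(n, batch_size, p=probs)
        weights = (n * probs[idx]) ** (-self.beta)
        weights /= weights.max()
        batch = [self.buffer[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors):
        # Replayed transitions take their new absolute TD error as priority.
        self.priorities[idx] = np.abs(td_errors) + self.eps


if __name__ == "__main__":
    # Hypothetical usage: random transitions standing in for market interactions.
    rng = np.random.default_rng(0)
    buf = PrioritizedReplayBuffer(capacity=1000)
    for _ in range(200):
        state = rng.normal(size=4)              # e.g. market/unit state features
        action = rng.uniform(-1.0, 1.0, size=2) # continuous bidding action
        buf.add(state, action, rng.normal(), rng.normal(size=4), False)
    batch, idx, weights = buf.sample(32)
    buf.update_priorities(idx, td_errors=rng.normal(size=32))  # TD errors from the critic
    print(len(batch), weights.shape)
```

In a DDPG training loop, the critic's TD errors on each sampled minibatch would be fed back through update_priorities, so that transitions with larger errors are replayed more often; this is the mechanism the abstract credits for the method's favourable computational performance relative to benchmark RL methods.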
