IEEE Transactions on Network Science and Engineering

Deep Reinforcement Learning Based Resource Management for Multi-Access Edge Computing in Vehicular Networks

Abstract

In this paper, we study the joint allocation of spectrum, computing, and storage resources in a multi-access edge computing (MEC)-based vehicular network. To support different vehicular applications, we consider two typical MEC architectures and formulate corresponding multi-dimensional resource optimization problems, which typically have high computational complexity and excessively long solution times. We therefore exploit reinforcement learning (RL) to transform the two formulated problems and solve them by leveraging the deep deterministic policy gradient (DDPG) and hierarchical learning architectures. Through off-line training, the network dynamics can be learned automatically and appropriate resource allocation decisions can be obtained rapidly to satisfy the quality-of-service (QoS) requirements of vehicular applications. Simulation results show that the proposed resource management schemes achieve high delay/QoS satisfaction ratios.
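The abstract names DDPG as the solver for the continuous allocation decisions. Below is a minimal PyTorch sketch of the standard DDPG actor-critic update applied to a continuous resource-allocation action; the state/action dimensions (STATE_DIM, ACTION_DIM), network sizes, and the random toy batch are illustrative assumptions, not the paper's formulation, and the paper's hierarchical learning architecture is not shown.

```python
# Minimal DDPG sketch for continuous spectrum/computing/storage allocation.
# STATE_DIM, ACTION_DIM, network sizes, and the toy batch below are assumed
# for illustration; exploration noise and the replay buffer are omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 8    # e.g., task sizes, deadlines, channel states (assumed)
ACTION_DIM = 3   # fractions of spectrum, computing, and storage to allocate

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM),
        )

    def forward(self, s):
        # Sigmoid keeps each allocation fraction in (0, 1).
        return torch.sigmoid(self.net(s))

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def soft_update(target, source, tau=0.005):
    # Polyak averaging of target-network weights, standard in DDPG.
    for tp, sp in zip(target.parameters(), source.parameters()):
        tp.data.mul_(1 - tau).add_(tau * sp.data)

actor, critic = Actor(), Critic()
actor_t, critic_t = Actor(), Critic()
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA = 0.99

def train_step(s, a, r, s2):
    # Critic update: regress Q(s, a) toward the bootstrapped target.
    with torch.no_grad():
        q_target = r + GAMMA * critic_t(s2, actor_t(s2))
    critic_loss = F.mse_loss(critic(s, a), q_target)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()
    # Actor update: ascend the critic's value of the actor's own action.
    actor_loss = -critic(s, actor(s)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
    soft_update(actor_t, actor)
    soft_update(critic_t, critic)

# Toy off-line training step on random transitions (illustrative only).
B = 32
train_step(torch.randn(B, STATE_DIM), torch.rand(B, ACTION_DIM),
           torch.randn(B, 1), torch.randn(B, STATE_DIM))
```

A deployed version of this scheme would draw batches from a replay buffer filled during off-line training and add exploration noise to the actor's output; the paper additionally combines DDPG with a hierarchical learning architecture for its second problem formulation.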