Journal of network and computer applications

End-to-end CNN-based dueling deep Q-Network for autonomous cell activation in Cloud-RANs



Abstract

The fifth generation (5G) technology is expected to support a rapid increase in infrastructure and mobile user subscriptions, with a growing number of remote radio heads (RRHs) per unit area deployed through cloud radio access networks (C-RANs). From the economic point of view, minimizing the energy consumption of the RRHs is a challenging issue. From the environmental point of view, achieving "greenness" in wireless networks is one of the many goals of telecommunication operators. This paper proposes a framework to balance the energy consumption of RRHs and the quality of service (QoS) satisfaction of users in cellular networks using a convolutional neural network (CNN)-based relational dueling deep Q-Network (DQN) scheme. Firstly, we formulate the cell activation/deactivation problem as a Markov decision process (MDP) and set up a two-layer CNN that takes raw images captured from the environment as its input. Then, we develop a dueling DQN-based autonomous cell activation scheme that dynamically turns RRHs on or off based on the energy consumption and the QoS requirements of users in the network. Finally, we decouple a customized physical resource allocation for rate-constrained and delay-constrained users from the cell activation scheme and formulate it as a convex optimization problem, ensuring that the QoS requirements of users are met with the minimum number of active RRHs under varying traffic conditions. Extensive simulations reveal that the proposed algorithm achieves a faster rate of convergence than the Nature DQN, Q-learning and dueling DQN schemes. Our algorithm also achieves stability in mobility scenarios compared with the DQN and dueling DQN schemes without a CNN. We also observe a slight improvement in balancing energy consumption and QoS satisfaction compared with the DQN and dueling DQN schemes.
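As a rough illustration of the architecture the abstract describes, the sketch below combines a two-convolutional-layer feature extractor with the dueling value/advantage decomposition used to score cell activation actions. This is a minimal PyTorch sketch only: the layer sizes, the 64x64 input resolution, and the eight-action output are illustrative assumptions, not the configuration reported in the paper.

import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    """Minimal dueling deep Q-network with a two-convolutional-layer
    front-end that maps a raw image observation of the network state
    to Q-values over cell activation actions. Layer sizes and the
    input resolution are assumptions, not the paper's exact setup."""

    def __init__(self, in_channels: int = 3, n_actions: int = 8):
        super().__init__()
        # Two convolutional layers, mirroring the paper's two-layer CNN;
        # kernel sizes and channel counts are assumptions.
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, stride=2),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2),
            nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the flattened feature size from a dummy 64x64 input
        # (assumed resolution of the captured environment image).
        with torch.no_grad():
            feat_dim = self.features(torch.zeros(1, in_channels, 64, 64)).shape[1]
        # Dueling architecture: separate state-value and advantage streams.
        self.value = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )
        self.advantage = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, n_actions)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x)
        v = self.value(f)       # V(s): shape (batch, 1)
        a = self.advantage(f)   # A(s, a): shape (batch, n_actions)
        # Standard dueling aggregation: Q = V + (A - mean(A)).
        return v + a - a.mean(dim=1, keepdim=True)

if __name__ == "__main__":
    # Example: pick the greedy cell activation action for one observation.
    net = DuelingDQN(in_channels=3, n_actions=8)
    obs = torch.rand(1, 3, 64, 64)  # placeholder raw image of the C-RAN state
    action = net(obs).argmax(dim=1).item()
    print("greedy action index:", action)

The dueling aggregation subtracts the mean advantage so that the value and advantage streams remain identifiable; this separation of state value from per-action advantage is the standard mechanism behind the faster convergence usually attributed to dueling DQNs.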
