...
Quality Control, Transactions

Reinforcement Learning-Based Resource Management Model for Fog Radio Access Network Architectures in 5G

Abstract

The need to cope with a continuously growing number of connected users and the increased demand for mobile broadband services in the Internet of Things has led to the introduction of the fog computing paradigm into fifth generation (5G) mobile networks in the form of the fog radio access network (F-RAN). The F-RAN approach brings computation capability to the edge of the network in order to reduce network bottlenecks and latency. However, despite this potential, the management of computational resources remains a challenge in F-RAN architectures. This paper therefore aims to overcome the shortcomings of conventional approaches to computational resource allocation in F-RANs. Reinforcement learning (RL) is presented as a method for dynamic and autonomous resource allocation, and an algorithm based on Q-learning is proposed. RL offers several benefits for resource-allocation problems, and the simulations carried out show that it outperforms reactive methods. Furthermore, the results show that the proposed algorithm reduces latency and thus has the potential to make a major impact on 5G applications, particularly the Internet of Things.
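As a rough illustration of the kind of Q-learning-based allocation the abstract describes, the sketch below trains a tabular Q-learning agent on a toy fog-node task-allocation problem. The state space (discretised fog-node load), action set (process locally, offload to a neighbouring fog node, or send to the cloud), latency-based reward, and all hyperparameters are illustrative assumptions for this sketch, not the paper's actual formulation.

```python
import random

# Toy Q-learning sketch for F-RAN-style task allocation (illustrative assumptions only).
NUM_LOAD_LEVELS = 5        # discretised fog-node load (state)
NUM_ALLOC_CHOICES = 3      # 0: process locally, 1: offload to neighbour fog node, 2: send to cloud
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table indexed by (state, action)
Q = [[0.0] * NUM_ALLOC_CHOICES for _ in range(NUM_LOAD_LEVELS)]

def step(state, action):
    """Toy environment: returns (next_state, reward); reward is negative assumed latency."""
    if action == 0:            # local processing: cheap when the node load is low
        latency = 1 + 2 * state
    elif action == 1:          # offload to a neighbouring fog node: moderate fixed latency
        latency = 3
    else:                      # fall back to the cloud: high fixed latency
        latency = 6
    next_state = random.randrange(NUM_LOAD_LEVELS)   # load evolves randomly in this sketch
    return next_state, -latency

def choose_action(state):
    """Epsilon-greedy policy over the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(NUM_ALLOC_CHOICES)
    return max(range(NUM_ALLOC_CHOICES), key=lambda a: Q[state][a])

state = 0
for _ in range(10000):
    action = choose_action(state)
    next_state, reward = step(state, action)
    # Standard Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
    state = next_state

for s in range(NUM_LOAD_LEVELS):
    best = max(range(NUM_ALLOC_CHOICES), key=lambda a: Q[s][a])
    print(f"load level {s}: best allocation action = {best}")
```

Using the negative of latency as the reward means the agent learns a policy that minimises latency, and the epsilon-greedy exploration keeps it from locking onto an allocation choice before all options have been tried; the actual paper's state, action, and reward design may differ.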
