International Conference on Electronics, Information, and Communication

DQR: A Deep Reinforcement Learning-based QoS Routing Protocol in Cognitive Radio Mobile Ad Hoc Networks



Abstract

In this paper, we propose a novel deep reinforcement learning-based quality-of-service routing (DQR) protocol that establishes the best route with minimum end-to-end queuing delay subject to a hop-count constraint in cognitive radio mobile ad hoc networks (CRAHNs). During the RREQ forwarding process, based on the proposed deep reinforcement learning (DRL) model, the DQR protocol unicasts an RREQ packet to the neighbor with the minimum cost-value while avoiding the affected region of the primary user, thereby reducing control overhead, queuing delay, and routing delay. Simulation results show that the DQR protocol outperforms AODV in terms of control overhead, packet delivery ratio (PDR), and delay, suggesting its suitability as a real-time protocol in CRAHNs.
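The abstract describes DQR's forwarding rule: unicast the RREQ to the neighbor with the minimum cost-value produced by the DRL model, while skipping neighbors inside the primary user's affected region. The paper's actual DRL architecture is not given in the abstract; the following is only a minimal sketch of that greedy selection step, in which all names (`Neighbor`, `cost_value`, `in_pu_region`) are illustrative assumptions.

```python
# Hypothetical sketch of DQR's greedy next-hop choice during RREQ
# forwarding: pick the neighbor with the minimum learned cost-value,
# excluding neighbors inside the primary user's affected region.
# The cost_value here stands in for the output of the trained DRL
# model, which the abstract does not specify.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Neighbor:
    node_id: str
    in_pu_region: bool  # True if inside the primary user's affected region
    cost_value: float   # cost predicted by the (assumed) DRL model


def select_next_hop(neighbors: List[Neighbor]) -> Optional[Neighbor]:
    """Return the minimum-cost neighbor outside the PU-affected region,
    or None if every neighbor is affected (RREQ not forwarded)."""
    candidates = [n for n in neighbors if not n.in_pu_region]
    if not candidates:
        return None
    return min(candidates, key=lambda n: n.cost_value)


# Example: node B has the lowest cost but lies in the PU region,
# so node C is selected instead.
nbrs = [Neighbor("A", False, 0.9),
        Neighbor("B", True, 0.2),
        Neighbor("C", False, 0.4)]
best = select_next_hop(nbrs)
print(best.node_id)  # C
```

Unicasting to a single minimum-cost neighbor, rather than broadcasting the RREQ, is what the abstract credits with saving control overhead.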
