IEEE Transactions on Parallel and Distributed Systems

Distributed Task Migration Optimization in MEC by Extending Multi-Agent Deep Reinforcement Learning Approach


Abstract

Closer to mobile users geographically, mobile edge computing (MEC) can provide some cloud-like capabilities to users more efficiently. This makes it possible for resource-limited mobile users to offload their computation-intensive and latency-sensitive tasks to MEC nodes. Owing to its great benefits, MEC has drawn wide attention and extensive work has been done. However, few of these efforts address the task migration problem caused by distributed user mobility, which cannot be ignored when quality of service (QoS) is considered. In this article, we study the task migration problem and try to minimize the average completion time of tasks under a migration energy budget. There are multiple independent users, and the movement of each mobile user is memoryless, forming a sequential decision-making process; thus a reinforcement learning algorithm based on a Markov chain model is applied with low computational complexity. To further facilitate cooperation among users, we devise a distributed task migration algorithm based on the counterfactual multi-agent (COMA) reinforcement learning approach to solve this problem. Extensive experiments are carried out to assess the performance of this distributed task migration algorithm. Compared with the no-migration (NM) and single-agent actor-critic (AC) algorithms, the proposed distributed task migration algorithm achieves up to a 30-50 percent reduction in average completion time.
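
For illustration, below is a minimal sketch of the counterfactual baseline idea at the core of COMA: a centralized critic evaluates the joint migration action of all users, and each agent's advantage subtracts a baseline that marginalizes out that agent's own action while keeping the other agents' actions fixed. All names (coma_advantage, critic_q, policy_probs, etc.) are hypothetical and are not taken from the paper's implementation; the toy critic is random and only demonstrates the computation.

```python
# Minimal sketch of the COMA counterfactual advantage for one agent
# (one mobile user choosing a migration target). Hypothetical names;
# not the paper's code.
import numpy as np

def coma_advantage(critic_q, policy_probs, joint_action, agent_idx):
    """Counterfactual advantage A(s, u) = Q(s, u) - sum_a' pi(a') Q(s, (u_-i, a')).

    critic_q     : function(joint_action) -> scalar Q-value from a centralized critic
    policy_probs : (n_actions,) action probabilities of this agent's policy
    joint_action : list of chosen actions, one per agent (e.g. target MEC node ids)
    agent_idx    : index of the agent the advantage is computed for
    """
    chosen_q = critic_q(joint_action)

    # Counterfactual baseline: marginalize this agent's action while
    # keeping the other agents' actions fixed.
    baseline = 0.0
    for action, prob in enumerate(policy_probs):
        counterfactual = list(joint_action)
        counterfactual[agent_idx] = action
        baseline += prob * critic_q(counterfactual)

    return chosen_q - baseline


# Toy usage: 2 users, 3 candidate MEC nodes each, random critic for illustration.
rng = np.random.default_rng(0)
q_table = rng.normal(size=(3, 3))            # fake centralized Q over joint actions
critic_q = lambda ja: q_table[ja[0], ja[1]]
probs = np.array([0.2, 0.5, 0.3])            # agent 0's policy over migration targets
adv = coma_advantage(critic_q, probs, joint_action=[1, 2], agent_idx=0)
print(f"counterfactual advantage: {adv:.3f}")
```

Because the baseline depends only on the other agents' fixed actions, each user can be credited for its own migration decision, which is what allows the cooperative, distributed policy updates described above.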
