IEEE International Conference on Parallel and Distributed Systems

Proactive Content Caching for Internet-of-Vehicles based on Peer-to-Peer Federated Learning


Abstract

To cope with the increasing content requests from emerging vehicular applications, caching content at edge nodes is imperative to reduce service latency and network traffic on the Internet-of-Vehicles (IoV). However, the inherent characteristics of IoV, including the high mobility of vehicles and the limited storage capacity of edge nodes, make the design of caching schemes difficult. Driven by recent advances in machine learning, learning-based proactive caching schemes can accurately predict content popularity and improve cache efficiency, but they need to gather and analyse users' content retrieval history and personal data, leading to privacy concerns. To address this challenge, we propose a new proactive caching scheme based on peer-to-peer federated deep learning, in which the global prediction model is trained from data scattered across vehicles to mitigate privacy risks. In our proposed scheme, a vehicle, rather than an edge node, acts as the parameter server that aggregates the updated global model from peers. A dual-weighted aggregation scheme is designed to achieve high global model accuracy. Moreover, to enhance caching performance, a Collaborative Filtering based Variational AutoEncoder model is developed to predict content popularity. Experimental results demonstrate that our proposed caching scheme substantially outperforms typical baselines, such as Greedy and Most Recently Used caching.
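For concreteness, the sketch below illustrates how an aggregator vehicle might combine peer updates under a dual-weighted rule. The abstract does not specify the two weighting factors, so their choice here (local dataset size and local validation accuracy), along with all function and field names, is a hypothetical assumption for illustration rather than the paper's actual scheme.

```python
import numpy as np

def dual_weighted_aggregate(peer_updates):
    """Combine peer model parameters into a new global model.

    peer_updates: list of dicts, one per vehicle, each holding
      - 'weights':     list of np.ndarray (the peer's local model parameters)
      - 'num_samples': size of the peer's local training set
      - 'accuracy':    the peer's local validation accuracy in [0, 1]

    The two factors below (data volume and local accuracy) are assumed
    for illustration; the paper's exact dual-weighted rule is not given
    in the abstract.
    """
    # First weight: each peer's share of the total training data.
    sizes = np.array([u['num_samples'] for u in peer_updates], dtype=float)
    w_data = sizes / sizes.sum()

    # Second weight: each peer's relative local model quality.
    accs = np.array([u['accuracy'] for u in peer_updates], dtype=float)
    w_acc = accs / accs.sum()

    # Combine both factors and renormalise so the weights sum to one.
    w = w_data * w_acc
    w = w / w.sum()

    # Weighted average of every parameter tensor across peers.
    num_layers = len(peer_updates[0]['weights'])
    aggregated = []
    for layer in range(num_layers):
        stacked = np.stack([u['weights'][layer] for u in peer_updates])
        aggregated.append(np.tensordot(w, stacked, axes=1))
    return aggregated
```

In this reading, the aggregator vehicle would broadcast the returned parameters back to its peers for the next local training round, playing the role that a fixed edge-node parameter server would otherwise play.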