Efficient Data Loader for Fast Sampling-Based GNN Training on Large Graphs

IEEE Transactions on Parallel and Distributed Systems

Abstract

Emerging graph neural networks (GNNs) have extended the successes of deep learning on data such as images and text to more complex, graph-structured data. By leveraging GPU accelerators, existing frameworks combine mini-batch training with sampling for effective and efficient model training on large graphs. However, this setup faces a scalability issue: loading rich vertex features from CPU to GPU over a limited-bandwidth link usually dominates the training cycle. In this article, we propose PaGraph, a novel, efficient data loader that supports general sampling-based GNN training on a single multi-GPU server. PaGraph significantly reduces data loading time by exploiting available GPU resources to cache frequently accessed graph data. It embodies a lightweight yet effective caching policy that jointly accounts for graph structural information and the data access patterns of sampling-based GNN training. Furthermore, to scale out across multiple GPUs, PaGraph develops a fast, GNN-computation-aware partitioning algorithm that avoids cross-partition access during data-parallel training and improves cache efficiency. Finally, it overlaps data loading with GNN computation to further hide loading costs. Evaluations on two representative GNN models, GCN and GraphSAGE, with two sampling methods, neighbor and layer-wise, show that PaGraph can eliminate data loading time from the GNN training pipeline and achieve up to a 4.8x speedup over state-of-the-art baselines. Together with a preprocessing optimization, PaGraph delivers up to a 16.0x end-to-end speedup.
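
To make the caching idea concrete, here is a minimal sketch, assuming PyTorch, of a static GPU-resident feature cache: the features of the highest-degree vertices are pinned in GPU memory once, and each mini-batch gather serves hits from the GPU while fetching misses from host memory over PCIe. The class name, the pure degree-based policy, and the tensor layout are illustrative assumptions, not PaGraph's actual implementation.

    import torch

    class DegreeCache:
        # Hypothetical static cache: pin the `capacity` highest-degree
        # vertices on the GPU, assuming samplers touch them most often.
        def __init__(self, features, degrees, capacity, device="cuda"):
            top = torch.topk(degrees, capacity).indices
            # Map global vertex id -> slot in the GPU cache (-1 = miss).
            self.slot = torch.full((features.shape[0],), -1, dtype=torch.long)
            self.slot[top] = torch.arange(capacity)
            self.gpu_feats = features[top].to(device)  # cached on GPU
            self.cpu_feats = features                  # fallback for misses
            self.device = device

        def gather(self, ids):
            # ids: 1-D LongTensor of vertex ids one mini-batch needs.
            slots = self.slot[ids]
            hit = slots >= 0
            out = torch.empty(ids.shape[0], self.cpu_feats.shape[1],
                              dtype=self.cpu_feats.dtype, device=self.device)
            out[hit] = self.gpu_feats[slots[hit]]                   # GPU hit
            out[~hit] = self.cpu_feats[ids[~hit]].to(self.device)   # PCIe copy
            return out

In a multi-GPU setting, each trainer would hold one such cache built over the vertices of its own partition, which is what makes the partitioning step below matter for cache efficiency.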
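The partitioning step can be sketched in the same spirit: assign each training vertex to the partition that already contains most of its L-hop sampled neighborhood, subject to a balance cap, so that data-parallel trainers rarely reach outside their own partition. The greedy scoring rule and the balance cap below are assumptions for illustration, not the paper's exact algorithm.

    # l_hop_neighbors: dict mapping each training vertex to the set of
    # vertices its L-hop sampled neighborhood can touch (precomputed).
    def partition(train_vertices, l_hop_neighbors, k):
        parts = [set() for _ in range(k)]      # vertices each partition stores
        assigned = [[] for _ in range(k)]      # training vertices per partition
        limit = len(train_vertices) // k + 1   # simple balance cap
        for v in train_vertices:
            nbrs = l_hop_neighbors[v]
            # Prefer the partition that already overlaps v's neighborhood
            # most, among partitions that still have room.
            best = max(
                (i for i in range(k) if len(assigned[i]) < limit),
                key=lambda i: len(parts[i] & nbrs),
            )
            assigned[best].append(v)
            parts[best] |= nbrs
        return assigned, parts

Grouping training vertices with overlapping neighborhoods this way shrinks each partition's working set, so a fixed-size per-GPU cache covers a larger fraction of the accesses.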

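Finally, the load/compute overlap can be approximated with a one-batch prefetch on a separate CUDA stream: while the GPU trains on batch t, a side stream copies the features of batch t+1. This is a minimal sketch assuming PyTorch with CUDA; the loader format, model, optimizer, and loss function are placeholders, not PaGraph's pipeline.

    import torch

    def train_epoch(loader, model, opt, loss_fn, device="cuda"):
        copy_stream = torch.cuda.Stream()
        batches = iter(loader)

        def prefetch():
            try:
                feats, labels = next(batches)
            except StopIteration:
                return None
            with torch.cuda.stream(copy_stream):
                # non_blocking copies overlap with compute on the default
                # stream, provided the host tensors live in pinned memory.
                return (feats.to(device, non_blocking=True),
                        labels.to(device, non_blocking=True))

        nxt = prefetch()
        while nxt is not None:
            # Make sure batch t's copy has finished before computing on it.
            torch.cuda.current_stream().wait_stream(copy_stream)
            feats, labels = nxt
            nxt = prefetch()  # start loading batch t+1 while batch t trains
            loss = loss_fn(model(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()

A production version would also call Tensor.record_stream on the prefetched tensors so the caching allocator does not reuse their memory while the copy stream is still working.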