IEEE International Symposium on Computer Architecture and High Performance Computing

Designing High Performance Heterogeneous Broadcast for Streaming Applications on GPU Clusters

Abstract

High-performance streaming applications are beginning to leverage the compute power offered by graphics processing units (GPUs) and the high network throughput offered by high-performance interconnects such as InfiniBand (IB) to boost their performance and scalability. These applications rely heavily on broadcast operations to move data, which is stored in host memory, from a single source (typically a live feed) to multiple GPU-based computing sites. While homogeneous broadcast designs take advantage of the IB hardware multicast feature to boost their performance, their heterogeneous counterparts require explicit data movement between the host and the GPU, which significantly hurts overall performance. There is a dearth of efficient heterogeneous broadcast designs for streaming applications, especially on emerging multi-GPU configurations. In this work, we propose novel techniques that fully and jointly exploit NVIDIA GPUDirect RDMA (GDR), CUDA inter-process communication (IPC), and IB hardware multicast features to design high-performance heterogeneous broadcast operations for modern multi-GPU systems. We propose intra-node, topology-aware schemes that maximize the performance benefits while minimizing the utilization of valuable PCIe resources. Further, we optimize the communication pipeline by overlapping the GDR + IB hardware multicast operations with CUDA IPC operations. Compared to existing solutions, our designs show up to a 3X improvement in the latency of a heterogeneous broadcast operation. Our designs also show up to a 67% improvement in the execution time of a streaming benchmark on a GPU-dense Cray CS-Storm system with 88 GPUs.
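
To make the role of CUDA IPC in such a design concrete, the sketch below shows one way an intra-node stage of a heterogeneous broadcast could be assembled with the CUDA runtime API: a leader process that has received the multicast payload into GPU memory exports its device buffer through an IPC handle, and sibling processes on other GPUs of the same node map that buffer and copy from it on their own streams. This is an illustrative sketch, not the paper's implementation; export_buffer, import_and_copy, and payload_bytes are hypothetical names, and the exchange of the IPC handle between processes (e.g., over shared memory or a socket) is omitted.

```c
/*
 * Minimal sketch, assuming a leader/sibling split per node: the leader
 * receives each broadcast chunk into GPU memory (via GDR + IB multicast in
 * the paper's setting) and shares that buffer with sibling processes through
 * CUDA IPC. Function and parameter names are illustrative, not from the paper.
 */
#include <cuda_runtime.h>
#include <stddef.h>

/* Leader: allocate the device buffer and export an IPC handle that can be
 * passed to sibling ranks through a host-side channel. */
int export_buffer(void **d_buf, size_t payload_bytes,
                  cudaIpcMemHandle_t *handle)
{
    if (cudaMalloc(d_buf, payload_bytes) != cudaSuccess)
        return -1;
    return (cudaIpcGetMemHandle(handle, *d_buf) == cudaSuccess) ? 0 : -1;
}

/* Sibling: map the leader's buffer and copy it into a local GPU buffer.
 * Issuing the copy on a dedicated stream is what lets it overlap with the
 * next GDR + IB multicast chunk, as described in the abstract. */
int import_and_copy(cudaIpcMemHandle_t handle, void *d_local,
                    size_t payload_bytes, cudaStream_t stream)
{
    void *d_remote = NULL;
    if (cudaIpcOpenMemHandle(&d_remote, handle,
                             cudaIpcMemLazyEnablePeerAccess) != cudaSuccess)
        return -1;

    cudaError_t err = cudaMemcpyAsync(d_local, d_remote, payload_bytes,
                                      cudaMemcpyDeviceToDevice, stream);

    /* A real pipeline would keep the mapping open across iterations; here we
     * wait for the copy and unmap to keep the example self-contained. */
    cudaStreamSynchronize(stream);
    cudaIpcCloseMemHandle(d_remote);
    return (err == cudaSuccess) ? 0 : -1;
}
```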
