IEEE Transactions on Parallel and Distributed Systems

Semi-External Memory Sparse Matrix Multiplication for Billion-Node Graphs

Abstract

Sparse matrix multiplication is traditionally performed in memory and scales to large matrices using the distributed memory of multiple nodes. In contrast, we scale sparse matrix multiplication beyond memory capacity by implementing sparse matrix dense matrix multiplication (SpMM) in a semi-external memory (SEM) fashion; i.e., we keep the sparse matrix on commodity SSDs and dense matrices in memory. Our SEM-SpMM incorporates many in-memory optimizations for large power-law graphs. It outperforms the in-memory implementations of Trilinos and Intel MKL and scales to billion-node graphs, far beyond the limitations of memory. Furthermore, on a single large parallel machine, our SEM-SpMM operates as fast as the distributed implementations of Trilinos using five times as much processing power. We also run our implementation in memory (IM-SpMM) to quantify the overhead of keeping data on SSDs. SEM-SpMM achieves almost 100 percent performance of IM-SpMM on graphs when the dense matrix has more than four columns; it achieves at least 65 percent performance of IM-SpMM on all inputs. We apply our SpMM to three important data analysis tasks—PageRank, eigensolving, and non-negative matrix factorization—and show that our SEM implementations significantly advance the state of the art.
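To make the access pattern concrete, the sketch below shows a plain CSR-based SpMM kernel: the sparse matrix is streamed one row at a time (as SEM-SpMM would stream it from SSD), while the dense matrix stays resident in memory. This is an illustrative sketch only, not the paper's optimized implementation; the function name `spmm_csr` and the toy matrices are invented for the example.

```python
import numpy as np

def spmm_csr(indptr, indices, data, dense):
    """Multiply a CSR sparse matrix by an in-memory dense matrix (SpMM sketch).

    In a semi-external-memory setting, the CSR arrays would be read
    sequentially from SSD while `dense` stays in RAM; here everything
    is in memory for clarity.
    """
    n_rows = len(indptr) - 1
    out = np.zeros((n_rows, dense.shape[1]))
    for i in range(n_rows):                    # stream one sparse row at a time
        for k in range(indptr[i], indptr[i + 1]):
            out[i] += data[k] * dense[indices[k]]  # accumulate row of result
    return out

# 3x3 sparse matrix [[1,0,2],[0,3,0],[4,0,5]] in CSR form
indptr  = [0, 2, 3, 5]
indices = [0, 2, 1, 0, 2]
data    = [1.0, 2.0, 3.0, 4.0, 5.0]
dense   = np.eye(3)  # identity dense matrix: result densifies the sparse matrix

result = spmm_csr(indptr, indices, data, dense)
```

Because only the dense matrix (and one sparse row at a time) must fit in memory, this access pattern is what lets the sparse side scale beyond memory capacity.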
