Annual Conference on Neural Information Processing Systems

PSVM: Parallelizing Support Vector Machines on Distributed Computers



Abstract

Support Vector Machines (SVMs) suffer from a widely recognized scalability problem in both memory use and computation time. To improve scalability, we have developed a parallel SVM algorithm (PSVM), which reduces memory use by performing a row-based, approximate matrix factorization, and which loads only essential data onto each machine to perform parallel computation. Let n denote the number of training instances, p the reduced matrix dimension after factorization (p is significantly smaller than n), and m the number of machines. PSVM reduces the memory requirement from O(n^2) to O(np/m), and improves computation time to O(np^2/m). Empirical study shows PSVM to be effective. PSVM Open Source is available for download at http://code.google.com/p/psvm/.
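The row-based approximate factorization replaces the full n-by-n kernel matrix K with a tall, thin factor H (n-by-p) such that K ≈ HH^T; storing H costs O(np) instead of O(n^2), and partitioning its rows across m machines gives the O(np/m) per-machine figure cited in the abstract. A minimal single-machine sketch of the idea, using a Nyström-style factorization as a stand-in for PSVM's parallel incomplete Cholesky (the RBF kernel, the choice of the first p rows as landmarks, and all parameters here are illustrative assumptions, not the paper's code):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    # Pairwise RBF kernel values between the rows of X and the rows of Y.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def low_rank_kernel_factor(X, p, gamma=0.1):
    """Return an n x p factor H with K ~= H @ H.T (Nystrom-style sketch).

    PSVM instead computes H via a row-based incomplete Cholesky
    factorization with greedy pivot selection, distributed so each of
    the m machines holds only n/m rows of H.
    """
    idx = np.arange(p)                         # landmark rows (illustrative choice)
    C = rbf_kernel(X, X[idx], gamma)           # n x p block of the kernel matrix
    W = C[idx]                                 # p x p landmark-landmark block
    # H = C @ W^{-1/2}, so that H @ H.T = C @ W^{-1} @ C.T ~= K.
    evals, evecs = np.linalg.eigh(W)
    evals = np.maximum(evals, 1e-12)           # guard against tiny eigenvalues
    return C @ (evecs / np.sqrt(evals)) @ evecs.T

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
H = low_rank_kernel_factor(X, p=40)            # stores 200*40 entries, not 200*200
K = rbf_kernel(X, X)
err = np.linalg.norm(K - H @ H.T) / np.linalg.norm(K)
```

In the distributed setting, the subsequent interior-point optimization works directly on the row-partitioned H, which is what brings the training cost down to O(np^2/m) per machine.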
