The Case for Strong Scaling in Deep Learning: Training Large 3D CNNs with Hybrid Parallelism
Author affiliations:
- Tokyo Institute of Technology, Tokyo 152-8550, Japan | Lawrence Livermore National Laboratory, Livermore, CA 94551, USA
- Lawrence Livermore National Laboratory, Livermore, CA 94551, USA
- Lawrence Livermore National Laboratory, Livermore, CA 94551, USA | Swiss Federal Institute of Technology (ETH Zurich), CH-8092 Zurich, Switzerland
- Lawrence Livermore National Laboratory, Livermore, CA 94551, USA | University of Oregon, Eugene, OR 97403, USA
- Lawrence Livermore National Laboratory, Livermore, CA 94551, USA
- Lawrence Livermore National Laboratory, Livermore, CA 94551, USA
- Tokyo Institute of Technology, Tokyo 152-8550, Japan | RIKEN Center for Computational Science, Kobe, Hyogo 650-0047, Japan
- Lawrence Livermore National Laboratory, Livermore, CA 94551, USA
- Lawrence Livermore National Laboratory, Livermore, CA 94551, USA
Keywords: Training; Three-dimensional displays; Computational modeling; Parallel processing; Solid modeling; Memory management; Image segmentation; Deep learning; convolutional neural network; model-parallel training; hybrid-parallel training
Related articles:
- A Hybrid Electrical/Optical Switch Architecture for Training Large-Scale Distributed Deep Learning
- Nemesyst: A Hybrid-Parallelism Deep Learning Framework for Internet-of-Things-Enabled Food Retail Refrigeration Systems
- 3D CNN-PCA: A Deep-Learning-Based Parameterization of Complex Geomodels
- Improving the Scaling of CNN Training by Exploiting Fine-Grained Parallelism
- H3DNet: A Deep Learning Framework for Hierarchical 3D Object Classification
- An Academic Emotion Classification and Recognition Method for Large-Scale Online Learning Environments, Based on an A-CNN and LSTM-ATT Deep Learning Pipeline