Journal of Computational Neuroscience

Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors



Abstract

Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only each subtree sending and receiving two double-precision values at each time step. Splitting cells is useful for attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations, load balance results in almost ideal runtime scaling. Applying the cell-splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.
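The load-balance argument above can be illustrated with a small sketch (hypothetical costs and function names, not the paper's implementation): when whole cells are assigned to processors, runtime is bounded by the single largest cell, but splitting each cell into two subtrees lets the two halves land on different processors.

```python
def max_load_whole_cells(costs, n_procs):
    """Greedy longest-processing-time assignment of whole cells.
    Returns the load of the busiest processor, which bounds runtime."""
    loads = [0.0] * n_procs
    for c in sorted(costs, reverse=True):
        loads[loads.index(min(loads))] += c
    return max(loads)

def max_load_split_cells(costs, n_procs):
    """Same greedy assignment, but after splitting every cell into two
    equal-cost subtrees (the paper splits at a tree-topology node;
    equal halves are an idealization for illustration)."""
    halves = [c / 2.0 for c in costs for _ in range(2)]
    return max_load_whole_cells(halves, n_procs)

# Wide range of cell sizes, as many cells as processors:
costs = [8.0, 3.0, 2.0, 1.0]
print(max_load_whole_cells(costs, 4))  # 8.0: bounded by the largest cell
print(max_load_split_cells(costs, 4))  # 4.0: largest cell shared by two processors
```

With whole-cell balancing the busiest processor carries the 8.0-cost cell alone; after splitting, no processor carries more than 4.0, so roughly twice as many processors can be used before the largest piece dominates again.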
