...
IEEE Transactions on Very Large Scale Integration (VLSI) Systems

NeuPart: Using Analytical Models to Drive Energy-Efficient Partitioning of CNN Computations on Cloud-Connected Mobile Clients

Abstract

Data processing on convolutional neural networks (CNNs) places a heavy burden on energy-constrained mobile platforms. This article optimizes energy on a mobile client by partitioning CNN computations between in situ processing on the client and offloaded computations in the cloud. A new analytical CNN energy model is formulated, capturing all major components of the in situ computation, for ASIC-based deep learning accelerators. The model is benchmarked against measured silicon data. The analytical framework is used to determine the optimal energy partition point between the client and the cloud at runtime. On standard CNN topologies, partitioned computation is demonstrated to provide significant energy savings on the client over a fully cloud-based computation or fully in situ computation. For example, at 80 Mbps effective bit rate and 0.78 W transmission power, the optimal partition for AlexNet [SqueezeNet] saves up to 52.4% [73.4%] energy over a fully cloud-based computation and 27.3% [28.8%] energy over a fully in situ computation.
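The runtime decision described above can be pictured as a small search over candidate split layers: run the first k layers in situ on the client, upload that layer's output, and let the cloud finish the network. The sketch below is a minimal illustration of that selection step only; the per-layer compute energies, output sizes, and function names are hypothetical placeholders and do not reproduce the paper's analytical energy model or its measured results.

```python
def client_energy_at_split(e_compute_j, out_bits, split, p_tx_w, rate_bps):
    """Client energy if layers [0, split) run in situ and the rest are offloaded.

    split == 0 means fully cloud-based (the raw input, out_bits[0], is uploaded);
    split == len(e_compute_j) means fully in situ (nothing is transmitted).
    """
    compute = sum(e_compute_j[:split])          # in-situ compute energy, joules
    if split == len(e_compute_j):
        tx = 0.0
    else:
        tx = p_tx_w * out_bits[split] / rate_bps  # transmission energy, joules
    return compute + tx


def best_partition(e_compute_j, out_bits, p_tx_w=0.78, rate_bps=80e6):
    """Return (split_index, client_energy) minimizing client-side energy."""
    candidates = range(len(e_compute_j) + 1)
    return min(
        ((k, client_energy_at_split(e_compute_j, out_bits, k, p_tx_w, rate_bps))
         for k in candidates),
        key=lambda kv: kv[1],
    )


if __name__ == "__main__":
    # Hypothetical 5-layer network: per-layer compute energy (J) and the bits
    # that would have to be uploaded if offloading started at that layer
    # (index 0 is the raw input). Numbers are illustrative only.
    e_compute_j = [0.020, 0.030, 0.025, 0.015, 0.040]
    out_bits = [12.0e6, 18.0e6, 6.0e6, 1.5e6, 0.2e6]
    print(best_partition(e_compute_j, out_bits))
```

With these made-up numbers the search selects an intermediate split rather than either extreme, mirroring the abstract's observation that a partitioned computation can cost the client less energy than both a fully cloud-based and a fully in situ computation.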
