Vision Systems Design

Edge device uses inexpensive, off-the-shelf components for deep learning inference


Abstract

Inference involves the use of a neural network, trained through deep learning, to make predictions on new data. Inference is a more effective way to answer complex and subjective questions than traditional rules-based image analysis. By optimizing networks to run on low-power hardware, inference can be run on the edge, eliminating the dependency on a central server for image analysis, which can lead to lower latency, higher reliability, and improved security. Here, the specification and construction of an edge-based deep learning inference system suitable for real-life deployment, costing users less than $1,000, is described.
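The article does not detail its optimization method, but a common way to make a network fit low-power edge hardware is post-training quantization, which stores weights as 8-bit integers instead of 32-bit floats. A minimal NumPy sketch of symmetric int8 quantization (hypothetical values, illustrative only):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization of a float32 tensor to int8."""
    scale = np.abs(w).max() / 127.0          # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor to measure quantization error."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)

# int8 storage is 4x smaller than float32; the per-weight
# rounding error is bounded by half a quantization step.
max_err = float(np.abs(weights - recovered).max())
print(max_err <= scale / 2 + 1e-8)
```

Frameworks such as TensorFlow Lite or ONNX Runtime apply this idea (plus activation quantization and operator fusion) automatically when exporting a trained model for edge deployment.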
