Inference is the use of a neural network, trained through deep learning, to make predictions on new data. It answers complex and subjective questions more effectively than traditional rules-based image analysis. By optimizing networks to run on low-power hardware, inference can be performed at the edge, eliminating the dependency on a central server for image analysis and thereby offering lower latency, higher reliability, and improved security. Here, we describe the specification and construction of an edge-based deep learning inference system suitable for real-life deployment at a cost to users of less than $1,000.