Multi-layer neural network processing by neural network accelerator using package of host-weighted merged weights and layer-specific instructions
Abstract
In the disclosed methods and systems for processing in a neural network system, the host computer system 402 writes to memory 226, which is shared with the neural network accelerator 238, multiple weight matrices associated with multiple layers of the neural network (602). The host computer system further assembles a plurality of layer-specific instructions into an instruction package (610). Each layer-specific instruction specifies the processing of a respective layer of the plurality of layers of the neural network and the respective offset of that layer's weight matrix in the shared memory. The host computer system writes the input data and the instruction package to the shared memory (612, 614). The neural network accelerator reads the instruction package from the shared memory (702) and processes the plurality of layer-specific instructions in the instruction package (702 to 712).
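The host-side assembly described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the field names (`layer_index`, `weight_offset`, `weight_size`) and the `assemble_package` helper are hypothetical; the patent only specifies that weight matrices are merged into shared memory and that each layer-specific instruction records its layer's weight offset there (steps 602 and 610).

```python
from dataclasses import dataclass
from typing import List


@dataclass
class LayerInstruction:
    """Hypothetical layer-specific instruction: identifies a layer and the
    location of its weight matrix in the shared memory region."""
    layer_index: int
    weight_offset: int  # byte offset of this layer's weights in shared memory
    weight_size: int    # byte length of this layer's weight matrix


def assemble_package(weight_sizes: List[int]) -> List[LayerInstruction]:
    """Lay the per-layer weight matrices back to back in shared memory and
    record each layer's offset in a layer-specific instruction."""
    package: List[LayerInstruction] = []
    offset = 0
    for i, size in enumerate(weight_sizes):
        package.append(
            LayerInstruction(layer_index=i, weight_offset=offset, weight_size=size)
        )
        offset += size
    return package


# Host side: three layers with weight matrices of 4096, 2048, and 512 bytes.
pkg = assemble_package([4096, 2048, 512])

# Accelerator side (steps 702-712, sketched): walk the instruction package in
# order, locating each layer's weights by the recorded offset.
for instr in pkg:
    # ...read weight_size bytes at weight_offset, then process that layer...
    print(instr.layer_index, instr.weight_offset, instr.weight_size)
```

The usage note in the loop mirrors the accelerator's role in the abstract: it consumes the package sequentially, so no per-layer round trip to the host is needed once the package and weights are in shared memory.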