
Multi-layer neural network processing by neural network accelerator using package of host-weighted merged weights and layer-specific instructions


Abstract

In the disclosed methods and systems for processing in a neural network system, the host computer system 402 writes, to memory 226 shared with the neural network accelerator 238, multiple weight matrices associated with multiple layers of the neural network (602). The host computer system further assembles a plurality of layer-specific instructions into an instruction package (610). Each layer-specific instruction specifies processing of a respective layer of the plurality of layers of the neural network, and a respective offset of a weight matrix in the shared memory. The host computer system writes input data and the instruction package to the shared memory (612, 614). The neural network accelerator reads the instruction package from the shared memory (702) and processes the plurality of layer-specific instructions in the instruction package (702 to 712).
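The host/accelerator hand-off described above can be sketched in a few lines. This is a minimal illustrative model, not the patent's implementation: all names (`merge_weights`, `assemble_package`, `accelerator_run`) and the fixed `(layer_index, weight_offset, size)` instruction layout are assumptions introduced here, standing in for the merged weight region and the offset-carrying layer-specific instructions.

```python
import struct

def merge_weights(weight_matrices):
    """Host side: concatenate per-layer weight blobs into one shared-memory
    region, recording each layer's byte offset."""
    offsets, merged = [], bytearray()
    for w in weight_matrices:
        offsets.append(len(merged))
        merged.extend(w)
    return bytes(merged), offsets

def assemble_package(offsets, layer_sizes):
    """Host side: pack one fixed-size instruction per layer.
    Each instruction records (layer_index, weight_offset, weight_size)."""
    header = struct.pack("<I", len(offsets))  # number of layer instructions
    body = b"".join(
        struct.pack("<III", i, off, size)
        for i, (off, size) in enumerate(zip(offsets, layer_sizes))
    )
    return header + body

def accelerator_run(package, shared_weights):
    """Accelerator side: walk the instruction package and fetch each
    layer's weights from the shared region by the recorded offset."""
    (count,) = struct.unpack_from("<I", package, 0)
    layers = []
    for i in range(count):
        idx, off, size = struct.unpack_from("<III", package, 4 + 12 * i)
        layers.append((idx, shared_weights[off:off + size]))
    return layers
```

In this sketch the host calls `merge_weights` and `assemble_package`, writes both results to shared memory, and the accelerator reconstructs each layer's weight slice from the package alone, mirroring steps 602, 610-614, and 702-712 of the abstract.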

