All-optical multilayer perceptrons differ in various ways from the ideal neural network model. Examples are the use of nonideal activation functions, which are truncated, asymmetric, and have a nonstandard gain; the restriction of the network parameters to non-negative values; and the limited accuracy of the weights. A backpropagation-based learning rule is presented that compensates for these nonidealities and enables the implementation of all-optical multilayer perceptrons in which learning occurs under computer control. The good performance of this learning rule, even when using a small number of weight levels, is illustrated by a series of computer simulations incorporating the nonidealities.
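The nonidealities named above (a truncated, asymmetric activation with a nonstandard gain, non-negative network parameters, and weights restricted to a small number of discrete levels) can be sketched in a toy backpropagation loop. This is an illustrative sketch under assumed values, not the paper's actual learning rule; every function name, the gain, and the number of weight levels are invented for illustration.

```python
import numpy as np

def truncated_activation(x, gain=0.5):
    # Asymmetric, truncated response: linear with a nonstandard gain,
    # clipped (truncated) to the range [0, 1]. Values are assumptions.
    return np.clip(gain * x, 0.0, 1.0)

def activation_deriv(x, gain=0.5):
    # Derivative of the truncated activation: the gain inside the
    # linear region, zero in the truncated (saturated) regions.
    y = gain * x
    return np.where((y > 0.0) & (y < 1.0), gain, 0.0)

def quantize(w, levels=16, w_max=1.0):
    # Restrict weights to non-negative values on a small number of
    # discrete levels, mimicking limited weight accuracy.
    w = np.clip(w, 0.0, w_max)
    step = w_max / (levels - 1)
    return np.round(w / step) * step

rng = np.random.default_rng(0)
W1 = quantize(rng.uniform(0.0, 1.0, (2, 3)))  # input -> hidden
W2 = quantize(rng.uniform(0.0, 1.0, (3, 1)))  # hidden -> output

def forward(x):
    a1 = x @ W1
    h = truncated_activation(a1)
    a2 = h @ W2
    y = truncated_activation(a2)
    return a1, h, a2, y

def train_step(x, t, lr=0.1):
    # One backpropagation step; the error is propagated through the
    # derivative of the *actual* nonideal activation, so learning can
    # compensate for the truncation and nonstandard gain. Updated
    # weights are re-quantized and kept non-negative.
    global W1, W2
    a1, h, a2, y = forward(x)
    delta2 = (y - t) * activation_deriv(a2)
    delta1 = (delta2 @ W2.T) * activation_deriv(a1)
    W2 = quantize(W2 - lr * (h.T @ delta2))
    W1 = quantize(W1 - lr * (x.T @ delta1))
    return float(np.mean((y - t) ** 2))

x = np.array([[0.2, 0.8]])
t = np.array([[0.4]])
losses = [train_step(x, t) for _ in range(50)]
```

After training, the weights remain on the discrete non-negative grid throughout, which is the constraint a computer-controlled optical implementation would impose between updates.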