This paper examines the function approximation properties of the "random neural network model," or GNN. The output of the GNN can be computed from the firing probabilities of selected neurons. We consider a feedforward bipolar GNN (BGNN) model which has both "positive and negative neurons" in the output layer, and prove that the BGNN is a universal function approximator. Specifically, for any f ∈ C([0,1]^s) and any ε > 0, we show that there exists a feedforward BGNN which approximates f uniformly with error less than ε. We also show that, after an appropriate clamping operation on its output, the feedforward GNN is likewise a universal function approximator.
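The universal approximation claim stated in the abstract can be written formally as follows (a restatement of the prose, using the sup norm on [0,1]^s; the symbol g denoting the BGNN output map is introduced here for illustration):

```latex
\forall f \in C([0,1]^s),\ \forall \varepsilon > 0,\ \exists \text{ a feedforward BGNN with output map } g \text{ such that}
\quad \sup_{x \in [0,1]^s} \lvert f(x) - g(x) \rvert < \varepsilon .
```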