An unsupervised perceptron algorithm and several generalizations are presented in this paper. A general analysis of neural network learning algorithms, based on stochastic approximation theory, is provided, and the convergence speed and robustness of a learning algorithm are defined. It is shown that the unsupervised perceptron algorithms converge to the principal component of the input data under certain conditions. In addition, the convergence speeds and robustness of the unsupervised perceptron, Oja (1982, 1983), and Widrow-Hoff algorithms are given in explicit form.
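The abstract does not spell out the paper's unsupervised perceptron update, but it cites the Oja (1982) single-unit rule, whose behavior it compares against. As a point of reference, the following minimal sketch (standard Oja rule, not the paper's own algorithm) illustrates convergence of the weight vector to the principal component of the input data under a decreasing step size, as in stochastic approximation analyses; the data-generation details are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of the Oja (1982) rule cited in the abstract:
#   y = w^T x,   w <- w + eta * y * (x - y * w)
# Under suitable step-size conditions the weight vector converges to the
# principal component (dominant eigenvector of the input covariance).
# The paper's own "unsupervised perceptron" update is not given in the
# abstract; this is only the standard reference algorithm for comparison.

rng = np.random.default_rng(0)

# Synthetic zero-mean data with a dominant direction along `true_pc`.
true_pc = np.array([0.8, 0.6])
X = rng.normal(size=(5000, 2)) * np.array([3.0, 0.5])   # anisotropic noise
X = X @ np.column_stack([true_pc, [-0.6, 0.8]]).T        # rotate so PC1 = true_pc

w = rng.normal(size=2)
w /= np.linalg.norm(w)

for t, x in enumerate(X, start=1):
    eta = 1.0 / (100 + t)        # decreasing step size (stochastic approximation)
    y = w @ x                    # unit output
    w += eta * y * (x - y * w)   # Hebbian term with implicit normalization

# Compare the learned direction with the dominant covariance eigenvector.
cov = X.T @ X / len(X)
eigvec = np.linalg.eigh(cov)[1][:, -1]
print("learned w (normalized):", w / np.linalg.norm(w))
print("dominant eigenvector:  ", eigvec)
```

Up to sign, the learned weight vector should align with the dominant eigenvector, which is the sense in which such algorithms "converge to the principal component of the input data."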