Abstract

An algorithm based on the Marquardt-Levenberg least-squares optimization method has been shown by S. Kollias and D. Anastassiou (IEEE Trans. on Circuits Syst., vol. 36, no. 8, p. 1092-101, Aug. 1989) to be a much more efficient training method than gradient descent, when applied to some small feedforward neural networks. Yet, for many applications, the increase in computational complexity of the method outweighs any gain in learning rate obtained over current training methods. However, the least-squares method can be more efficiently implemented on parallel architectures than standard methods. This is demonstrated by comparing computation times and learning rates for the least-squares method implemented on 1, 2, 4, 8, and 16 processors on an Intel iPSC/2 multicomputer. Two applications which demonstrate the faster real-time learning rate of the least-squares method over that of gradient descent are given.
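For readers unfamiliar with the update rule the abstract refers to, the following is a minimal sketch of Marquardt-Levenberg (Levenberg-Marquardt) training for a small feedforward network, fit to an XOR toy problem. It is only an illustration of the damped least-squares step dw = -(J^T J + lambda*I)^{-1} J^T e; the network size, the finite-difference Jacobian, and the XOR data are assumptions made here for brevity, and the paper's recursive formulation and its parallel decomposition across iPSC/2 processors are not shown.

```python
import numpy as np

# Sketch of Levenberg-Marquardt training for a tiny feedforward network
# (one hidden layer of sigmoid units). Not the paper's implementation:
# the Jacobian is formed by finite differences purely for brevity.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(w, X, n_in, n_hid):
    # Unpack a flat weight vector into layer matrices (with bias columns).
    W1 = w[: (n_in + 1) * n_hid].reshape(n_hid, n_in + 1)
    W2 = w[(n_in + 1) * n_hid:].reshape(1, n_hid + 1)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])        # append bias input
    H = sigmoid(Xb @ W1.T)
    Hb = np.hstack([H, np.ones((H.shape[0], 1))])
    return sigmoid(Hb @ W2.T).ravel()

def residuals(w, X, y, n_in, n_hid):
    return forward(w, X, n_in, n_hid) - y

def numerical_jacobian(w, X, y, n_in, n_hid, eps=1e-6):
    # Finite-difference Jacobian of the residual vector w.r.t. the weights.
    e0 = residuals(w, X, y, n_in, n_hid)
    J = np.zeros((e0.size, w.size))
    for j in range(w.size):
        wp = w.copy()
        wp[j] += eps
        J[:, j] = (residuals(wp, X, y, n_in, n_hid) - e0) / eps
    return J, e0

def train_lm(X, y, n_hid=3, iters=200, lam=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    w = rng.normal(scale=0.5, size=(n_in + 1) * n_hid + (n_hid + 1))
    for _ in range(iters):
        J, e = numerical_jacobian(w, X, y, n_in, n_hid)
        A = J.T @ J + lam * np.eye(w.size)               # damped normal equations
        step = np.linalg.solve(A, J.T @ e)
        w_new = w - step
        if np.sum(residuals(w_new, X, y, n_in, n_hid) ** 2) < np.sum(e ** 2):
            w, lam = w_new, lam * 0.7                    # accept step, reduce damping
        else:
            lam *= 2.0                                   # reject step, increase damping
    return w

if __name__ == "__main__":
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)
    w = train_lm(X, y)
    print(np.round(forward(w, X, 2, 3), 3))              # approaches [0, 1, 1, 0]
```

The gradient-descent baseline discussed in the abstract would replace the damped normal-equation solve with a step proportional to J^T e alone; the parallel speedups reported in the paper come from distributing exactly this matrix assembly and solve across processors.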

Meeting Name

IJCNN International Joint Conference on Neural Networks (1990: Jun. 17-21, San Diego, CA)

Department(s)

Computer Science

Second Department

Mechanical and Aerospace Engineering

Sponsor(s)

IEEE Neural Networks Council
International Neural Network Society

Keywords and Phrases

Intel IPSC/2; Marquardt-Levenberg Least-Square Optimization Method; Computation Times; Convergence; Learning Rates; Learning Systems; Least Squares Approximations; Neural Nets; Optimisation; Parallel Architectures; Parallel Processing; Recursive Least Squares Neural Network Training; Supervised Learning

Document Type

Article - Conference proceedings

Document Version

Final Version

File Type

text

Language(s)

English

Rights

© 1990 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Jun 1990
