Parallel Implementation of a Recursive Least-Squares Neural Network Training Method on the Intel iPSC/2

Abstract

An algorithm based on the Marquardt-Levenberg least-squares optimization method has been shown by S. Kollias and D. Anastassiou to be a much more efficient training method than gradient descent when applied to some small feedforward neural networks. Yet for many applications, the increase in computational complexity of the method outweighs any gain in learning rate over current training methods. However, the least-squares method lends itself to a more efficient implementation on distributed-memory parallel computers than standard methods do. This is demonstrated by comparing computation times and learning rates for the least-squares method implemented on 1, 2, 4, 8, and 16 processors of an Intel iPSC/2 multicomputer. Two applications are given that demonstrate the faster real-time learning of the least-squares method relative to gradient descent.
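For orientation, the sketch below illustrates the standard Marquardt-Levenberg (damped least-squares) weight update the abstract refers to, in which each step solves (JᵀJ + μI)Δw = Jᵀe for the error Jacobian J. This is a minimal serial illustration only, not the paper's recursive or parallel formulation; the network size, the fixed damping parameter mu, and all function names are assumptions made for the example.

```python
import numpy as np

def forward(w, x):
    """Flat weight vector w -> output of a tiny 1-2-1 tanh network.
    Layout of w (illustrative): hidden weights, hidden biases,
    output weights, output bias."""
    w1, b1, w2, b2 = w[0:2], w[2:4], w[4:6], w[6]
    h = np.tanh(w1 * x + b1)   # hidden layer, 2 units
    return w2 @ h + b2         # linear output unit

def jacobian(w, xs, eps=1e-6):
    """Central finite-difference Jacobian of the outputs wrt the weights."""
    J = np.zeros((len(xs), len(w)))
    for j in range(len(w)):
        dw = np.zeros_like(w)
        dw[j] = eps
        J[:, j] = [(forward(w + dw, x) - forward(w - dw, x)) / (2 * eps)
                   for x in xs]
    return J

def ml_step(w, xs, ys, mu):
    """One damped least-squares update: solve (J^T J + mu I) dw = J^T e."""
    e = np.array([y - forward(w, x) for x, y in zip(xs, ys)])
    J = jacobian(w, xs)
    A = J.T @ J + mu * np.eye(len(w))  # mu*I keeps the system well-posed
    return w + np.linalg.solve(A, J.T @ e)

# Toy usage: fit y = sin(pi*x) on a few sample points.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=7)
xs = np.linspace(-1.0, 1.0, 20)
ys = np.sin(np.pi * xs)
for _ in range(50):
    w = ml_step(w, xs, ys, mu=0.1)
print("final SSE:", sum((y - forward(w, x)) ** 2 for x, y in zip(xs, ys)))
```

The cost per step is dominated by forming JᵀJ and solving the resulting linear system, which is the extra computational burden the abstract notes; it is also the work that distributes naturally across the processors of a machine like the iPSC/2, since rows of J correspond to independent training patterns.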

Department(s)

Computer Science

Second Department

Mechanical and Aerospace Engineering

International Standard Serial Number (ISSN)

0743-7315

Document Type

Article - Journal

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 1993 Academic Press, Inc. All rights reserved.

Publication Date

01 May 1993
