Parallel Implementation of a Recursive Least Squares Neural Network Training Method on the Intel iPSC/2

Bruce M. McMillin, Missouri University of Science and Technology
K. Krishnamurthy, Missouri University of Science and Technology
James Edward Steck
M. Reza Ashouri
Gary G. Leininger


Abstract

An algorithm based on the Marquardt-Levenberg least-squares optimization method has been shown by S. Kollias and D. Anastassiou (IEEE Trans. Circuits Syst., vol. 36, no. 8, pp. 1092-1101, Aug. 1989) to be a much more efficient training method than gradient descent when applied to some small feedforward neural networks. Yet, for many applications, the increase in computational complexity of the method outweighs any gain in learning rate obtained over current training methods. However, the least-squares method can be implemented more efficiently on parallel architectures than standard methods. This is demonstrated by comparing computation times and learning rates for the least-squares method implemented on 1, 2, 4, 8, and 16 processors of an Intel iPSC/2 multicomputer. Two applications which demonstrate the faster real-time learning rate of the least-squares method over that of gradient descent are given.
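
For reference, a minimal sketch of the classical Marquardt-Levenberg weight update for a network with weight vector $w$ is given below. This is the standard batch form of the method and is offered only as an illustration; it is not necessarily the exact recursive least-squares formulation used in the paper.

$$
w_{k+1} = w_k - \left( J_k^{\top} J_k + \mu_k I \right)^{-1} J_k^{\top} e_k
$$

Here $e_k$ is the vector of network output errors over the training set, $J_k$ is its Jacobian with respect to the weights, and $\mu_k$ is a damping parameter that interpolates between a Gauss-Newton step (small $\mu_k$) and a gradient-descent step (large $\mu_k$). Forming and factoring $J_k^{\top} J_k$ accounts for the extra computational cost relative to plain gradient descent noted in the abstract, and, since the rows of $J_k$ can be computed independently for different training patterns, it is also the kind of work that is consistent with the abstract's claim that the method maps well onto parallel architectures.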