Using Taguchi's Method of Experimental Design to Control Errors in Layered Perceptrons

William E. Bond, Missouri University of Science and Technology
Gerald E. Peterson
Daniel C. St. Clair
Stephen R. Aylward

This document has been relocated to http://scholarsmine.mst.edu/comsci_facwork/286


Abstract

A significant problem in the design and construction of an artificial neural network for function approximation is limiting the magnitude and the variance of errors when the network is used in the field. Network errors can occur when the training data does not faithfully represent the required function due to noise or low sampling rates, when the network's flexibility does not match the variability of the data, or when the input data to the resultant network is noisy. This paper reports on several experiments whose purpose was to rank the relative significance of these error sources and thereby find neural network design principles for limiting the magnitude and variance of network errors.
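
To make the ranking idea in the abstract concrete, the sketch below illustrates the general Taguchi machinery of an orthogonal-array experiment and "smaller-the-better" signal-to-noise ratios, applied to the three error sources named above. The factor levels, error values, and the use of an L4 array are hypothetical placeholders chosen for illustration; they are not the experimental design or results reported in the paper.

```python
# Minimal sketch of Taguchi-style factor ranking (hypothetical data).
import numpy as np

# L4 orthogonal array: three two-level factors in four runs.
# Columns: training-data noise, network flexibility, input noise.
# 0 = low level, 1 = high level.
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])
factors = ["training noise", "network flexibility", "input noise"]

# Hypothetical network errors observed over repeated trials of each run.
errors = np.array([
    [0.02, 0.03],
    [0.05, 0.06],
    [0.09, 0.08],
    [0.11, 0.12],
])

# "Smaller-the-better" signal-to-noise ratio for each run:
# S/N = -10 * log10(mean(y^2)).
sn = -10.0 * np.log10(np.mean(errors**2, axis=1))

# Main effect of each factor: mean S/N at the high level minus mean S/N
# at the low level.  The factor with the largest absolute effect would be
# ranked as the most significant error source.
for name, column in zip(factors, L4.T):
    effect = sn[column == 1].mean() - sn[column == 0].mean()
    print(f"{name:22s} effect on S/N: {effect:+.2f} dB")
```

In a full study, each row of the array would correspond to training and evaluating a network under that combination of factor levels, with the observed approximation errors replacing the placeholder values here.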