A significant problem in the design and construction of an artificial neural network for function approximation is limiting the magnitude and variance of errors when the network is used in the field. Network errors can occur when the training data does not faithfully represent the required function due to noise or low sampling rates, when the network's flexibility does not match the variability of the data, or when the input data to the resultant network is noisy. This paper reports on several experiments whose purpose was to rank the relative significance of these error sources and thereby find neural network design principles for limiting the magnitude and variance of network errors.
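The three error sources named in the abstract can be illustrated with a small simulation. The sketch below (an assumption for illustration, not the paper's actual experimental design) trains a one-hidden-layer perceptron on sparse, noisy samples of a target function, where hidden-layer width stands in for network flexibility, and then compares field error with clean versus noisy inputs:

```python
import numpy as np

# Hypothetical illustration of the abstract's three error sources for a
# layered perceptron approximating sin(x):
#   1. noisy / sparse training samples
#   2. network flexibility (hidden-layer width)
#   3. noisy inputs when the trained network is used in the field

rng = np.random.default_rng(0)

# --- training data: sparse samples of sin(x) on [0, 2*pi] with target noise
n_train = 40
x_train = rng.uniform(0.0, 2 * np.pi, size=(n_train, 1))
y_train = np.sin(x_train) + rng.normal(0.0, 0.05, size=(n_train, 1))

# --- one-hidden-layer perceptron with tanh units, trained by batch gradient descent
hidden = 20                                   # flexibility knob
W1 = rng.normal(0.0, 0.5, size=(1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.5, size=(hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(x_train @ W1 + b1)            # forward pass
    pred = h @ W2 + b2
    err = pred - y_train                      # gradient of 0.5 * MSE w.r.t. pred
    # backward pass
    dW2 = h.T @ err / n_train
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)        # tanh derivative
    dW1 = x_train.T @ dh / n_train
    db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

def net(x):
    """Trained network's output for inputs x."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

# --- field use: error magnitude with clean inputs vs. noise-corrupted inputs
x_test = np.linspace(0.0, 2 * np.pi, 200).reshape(-1, 1)
mse_clean = np.mean((net(x_test) - np.sin(x_test)) ** 2)
x_noisy = x_test + rng.normal(0.0, 0.1, size=x_test.shape)
mse_noisy = np.mean((net(x_noisy) - np.sin(x_test)) ** 2)
```

Varying `n_train`, the noise levels, and `hidden` changes `mse_clean` and `mse_noisy` in different proportions; the paper's contribution is using Taguchi's method of experimental design to rank such factors systematically rather than by one-at-a-time sweeps.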
W. E. Bond et al., "Using Taguchi's Method of Experimental Design to Control Errors in Layered Perceptrons," IEEE Transactions on Neural Networks, Institute of Electrical and Electronics Engineers (IEEE), Jan 1995.
The definitive version is available at https://doi.org/10.1109/72.392257
Keywords and Phrases
Taguchi's Method; Approximation Theory; Error Analysis; Error Sources; Error Variance; Function Approximation; Layered Perceptrons; Multilayer Perceptrons; Neural Network
International Standard Serial Number (ISSN)
Article - Journal
© 1995 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.