Abstract

A significant problem in the design and construction of an artificial neural network for function approximation is limiting the magnitude and the variance of errors when the network is used in the field. Network errors can occur when the training data does not faithfully represent the required function due to noise or low sampling rates, when the network's flexibility does not match the variability of the data, or when the input data to the resultant network is noisy. This paper reports on several experiments whose purpose was to rank the relative significance of these error sources and thereby find neural network design principles for limiting the magnitude and variance of network errors.
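The three error sources named in the abstract can be illustrated with a minimal sketch, not the paper's experimental setup: a small multilayer perceptron fit to noisy, sparsely sampled values of a simple 1-D target function, then evaluated on noisy field inputs. The target function, noise levels, sample counts, and network size below are illustrative assumptions, not values from the paper.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)            # "true" function to approximate (assumed example)

# Error source 1: training data that misrepresents the function
# (observation noise plus a low sampling rate).
x_train = rng.uniform(0.0, 1.0, size=20)
y_train = f(x_train) + rng.normal(0.0, 0.1, size=20)

# Error source 2: network flexibility mismatched to the data's variability
# (here, deliberately too few hidden units for a full sine period).
net = MLPRegressor(hidden_layer_sizes=(2,), max_iter=5000, random_state=0)
net.fit(x_train.reshape(-1, 1), y_train)

# Error source 3: noisy input data when the trained network is used in the field.
x_field = np.linspace(0.0, 1.0, 200)
x_noisy = x_field + rng.normal(0.0, 0.02, size=x_field.size)
y_pred = net.predict(x_noisy.reshape(-1, 1))

# The quantities the paper's experiments aim to limit: the magnitude
# and variance of the field error.
err = y_pred - f(x_field)
print(f"field error: mean magnitude={np.abs(err).mean():.3f}, variance={err.var():.3f}")

Varying one factor at a time (noise level, sample count, hidden-layer size, input noise) and comparing the resulting error statistics mirrors, in spirit, the kind of factor-ranking experiments the abstract describes.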

Department(s)

Computer Science

Keywords and Phrases

Taguchi's Method; Approximation Theory; Error Analysis; Error Sources; Error Variance; Function Approximation; Layered Perceptrons; Multilayer Perceptrons; Neural Network

International Standard Serial Number (ISSN)

1045-9227

Document Type

Article - Journal

Document Version

Final Version

File Type

text

Language(s)

English

Rights

© 1995 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Jan 1995
