Function Approximations with Multilayer Perceptrons and Simultaneous Recurrent Networks

Abstract

Approximation of highly nonlinear functions is an important area of computational intelligence. Its relevance can be traced to numerous fields, from the control of complex nonlinear dynamics to difficult practical applications such as brain-like intelligent control and planning. Multilayer feedforward neural networks, or Multilayer Perceptrons (MLPs), with sigmoidal activation functions have long been regarded as "universal function approximators". The inability of conventional MLPs to approximate non-smooth functions, together with attempts to replicate the human brain, has led to new types of artificial neural networks such as Simultaneous Recurrent Networks (SRNs). The focus of this paper is threefold: first, to demonstrate that a neural network can learn, or approximate, another neural network; second, to compare the capabilities of a Multilayer Perceptron and a Simultaneous Recurrent Network in approximating a randomly generated nonlinear function; and third, to employ other computational intelligence paradigms, such as particle swarm optimization (PSO), to obtain better convergence in training MLP and SRN neural networks. A nonlinear function f1 is generated using an MLP. This function f1 is approximated both by an MLP trained with conventional backpropagation (BP) and by an SRN trained with PSO. The nonlinearity and complexity of f1 are varied by changing the size of the generated dataset. Similarly, another nonlinear function f2 is generated using an SRN and approximated both by an MLP trained with BP (MLPBP) and by an SRN trained with PSO (SRNPSO). The comparison of MLPBP and SRNPSO is based on two factors: the number of epochs required and the average mean square error achieved. Figures 1 and 2 show the structures of the MLP and SRN, respectively.
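The sketch below is not the authors' code; it only illustrates the f1 setup described in the abstract under stated assumptions: a fixed, randomly initialized "teacher" MLP generates a nonlinear target function, and a second "student" MLP with sigmoidal activations is trained with plain backpropagation on mean squared error to approximate it. The layer sizes, learning rate, dataset size, and stopping threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the f1 experiment (illustrative assumptions throughout):
# a fixed "teacher" MLP defines a nonlinear function, and a "student" MLP
# with sigmoidal activations learns it via conventional backpropagation.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_mlp(n_in, n_hidden, n_out):
    return {
        "W1": rng.normal(0, 1, (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 1, (n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def forward(net, X):
    h = sigmoid(X @ net["W1"] + net["b1"])   # hidden layer (sigmoidal)
    y = sigmoid(h @ net["W2"] + net["b2"])   # output layer (sigmoidal)
    return h, y

# Teacher network defines the nonlinear function f1; its weights stay fixed.
teacher = init_mlp(2, 8, 1)
X = rng.uniform(-1, 1, (200, 2))             # dataset size controls complexity
_, T = forward(teacher, X)

# Student network is trained with backpropagation (gradient descent on MSE).
student = init_mlp(2, 8, 1)
lr = 0.5
for epoch in range(5000):
    h, Y = forward(student, X)
    err = Y - T
    mse = np.mean(err ** 2)
    if mse < 1e-4:                           # illustrative stopping threshold
        break
    # Backpropagate the error through the sigmoid nonlinearities.
    d_out = err * Y * (1 - Y)
    d_hid = (d_out @ student["W2"].T) * h * (1 - h)
    student["W2"] -= lr * h.T @ d_out / len(X)
    student["b2"] -= lr * d_out.mean(axis=0)
    student["W1"] -= lr * X.T @ d_hid / len(X)
    student["b1"] -= lr * d_hid.mean(axis=0)

print(f"stopped at epoch {epoch} with average MSE {mse:.6f}")
```

The SRN/PSO side of the comparison would replace the gradient-descent update with a particle swarm search over the recurrent network's weights; the epoch count and final average MSE are the two quantities the paper compares.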

Department(s)

Electrical and Computer Engineering

Keywords and Phrases

Backpropagation Training; Functional Approximation; Multilayer Perceptrons; Particle Swarm Optimization (PSO); Simultaneous Recurrent Networks

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2004 Knowledge Engineering and Discovery Research Institute, All rights reserved.

Publication Date

01 Dec 2004

