Abstract
In this paper, nonlinear functions generated by randomly initialized multilayer perceptrons (MLPs) and simultaneous recurrent neural networks (SRNs) are learned by MLPs and SRNs. Training SRNs is a challenging task, and a new learning algorithm, DEPSO, is introduced. DEPSO is a standard particle swarm optimization (PSO) algorithm augmented with a differential evolution (DE) step to aid swarm convergence. The results obtained with DEPSO are compared with those of the standard backpropagation (BP) and PSO algorithms. It is further verified that functions generated by SRNs are harder to learn than those generated by MLPs, but DEPSO learns the functions generated by both MLPs and SRNs better than BP and PSO do. The three algorithms are also applied to several benchmark functions to confirm the results.
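As a rough illustration of the DEPSO idea described in the abstract, the sketch below combines standard PSO velocity and position updates with a differential evolution mutation-and-crossover step applied to each particle, followed by greedy selection. The parameter values, the DE variant (DE/rand/1 with binomial crossover), the sphere benchmark, and all function names are illustrative assumptions; the paper's exact DEPSO formulation and its setup for training MLP/SRN weights may differ.

# Minimal DEPSO-style sketch (assumptions noted above): PSO update plus a DE
# step per particle. Not the authors' exact algorithm.
import numpy as np

def sphere(x):
    # Assumed benchmark objective: minimize the sum of squares.
    return float(np.sum(x ** 2))

def depso(f, dim=10, n_particles=30, iters=200, w=0.729, c1=1.49, c2=1.49,
          de_f=0.5, de_cr=0.9, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()        # global best

    for _ in range(iters):
        # Standard PSO velocity and position update.
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)

        # DE step: build a trial vector from three distinct peers and keep it
        # only if it improves the particle (greedy selection).
        for i in range(n_particles):
            a, b, c = rng.choice([j for j in range(n_particles) if j != i],
                                 size=3, replace=False)
            mutant = x[a] + de_f * (x[b] - x[c])
            cross = rng.random(dim) < de_cr
            trial = np.clip(np.where(cross, mutant, x[i]), lo, hi)
            if f(trial) < f(x[i]):
                x[i] = trial

        # Update personal and global bests.
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()

    return g, float(pbest_val.min())

if __name__ == "__main__":
    best_x, best_val = depso(sphere)
    print("best value:", best_val)

In the weight-training setting the abstract describes, each particle would encode a full MLP or SRN weight vector and f would measure the network's error on the target nonlinear function, rather than the assumed sphere benchmark used here.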
Recommended Citation
R. Cleaver and G. K. Venayagamoorthy, "Learning Nonlinear Functions with MLPs and SRNs," Proceedings of the 2009 International Joint Conference on Neural Networks (IJCNN 2009), Institute of Electrical and Electronics Engineers (IEEE), Jun. 2009.
The definitive version is available at https://doi.org/10.1109/IJCNN.2009.5179060
Meeting Name
International Joint Conference on Neural Networks, 2009 (IJCNN 2009)
Department(s)
Electrical and Computer Engineering
Sponsor(s)
National Science Foundation (U.S.)
Keywords and Phrases
Benchmark Functions; Differential Evolution; Learning Capabilities; Multi-Layer Perceptrons; Nonlinear Functions; PSO Algorithms; Particle Swarm Optimization Algorithm
Document Type
Article - Conference proceedings
Document Version
Final Version
File Type
text
Language(s)
English
Rights
© 2009 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.
Publication Date
01 Jun 2009