In this paper, nonlinear functions generated by randomly initialized multilayer perceptrons (MLPs) and simultaneous recurrent neural networks (SRNs) are learned by MLPs and SRNs. Training SRNs is a challenging task, and a new learning algorithm, DEPSO, is introduced. DEPSO is a standard particle swarm optimization (PSO) algorithm augmented with a differential evolution (DE) step to aid swarm convergence. Results obtained with DEPSO are compared against those of the standard backpropagation (BP) and PSO algorithms. It is further verified that functions generated by SRNs are harder to learn than those generated by MLPs, and that DEPSO learns the functions generated by both MLPs and SRNs better than BP and PSO do. The three algorithms are also trained on several benchmark functions to confirm these results.
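The abstract describes DEPSO only at a high level: a standard PSO algorithm with an added differential evolution step. The sketch below illustrates one common way such a hybrid can be built (a DE/rand/1/bin step applied to the personal bests after each PSO update); the paper's exact update rules are not given in the abstract, so the parameters and the placement of the DE step here are assumptions for illustration.

```python
import random

def depso_minimize(f, dim, bounds, n_particles=20, iters=100, seed=0,
                   w=0.7, c1=1.5, c2=1.5, F=0.5, CR=0.9):
    """Minimize f over a box [lo, hi]^dim with a PSO + DE hybrid.

    Hypothetical sketch: after each particle's PSO velocity/position
    update, a DE/rand/1/bin trial built from three other personal bests
    may replace that particle's personal best, nudging the swarm along.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    Pf = [f(x) for x in X]                     # personal best fitnesses
    g = min(range(n_particles), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                     # global best

    for _ in range(iters):
        for i in range(n_particles):
            # --- standard PSO velocity/position update ---
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
            # --- DE/rand/1/bin step on the personal best ---
            a, b, c = rng.sample([j for j in range(n_particles) if j != i], 3)
            jr = rng.randrange(dim)            # guarantee one mutated gene
            trial = [min(hi, max(lo, P[a][d] + F * (P[b][d] - P[c][d])))
                     if (rng.random() < CR or d == jr) else P[i][d]
                     for d in range(dim)]
            ft = f(trial)
            if ft < Pf[i]:
                P[i], Pf[i] = trial, ft
            if Pf[i] < Gf:
                G, Gf = P[i][:], Pf[i]
    return G, Gf
```

For example, minimizing the sphere benchmark `f(x) = sum(v*v for v in x)` with `depso_minimize(f, dim=3, bounds=(-5.0, 5.0))` drives the global best close to the origin; the DE step keeps refining personal bests even when PSO momentum overshoots.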
R. Cleaver and G. K. Venayagamoorthy, "Learning Nonlinear Functions with MLPs and SRNs," Proceedings of the International Joint Conference on Neural Networks (IJCNN 2009), Institute of Electrical and Electronics Engineers (IEEE), June 2009.
The definitive version is available at http://dx.doi.org/10.1109/IJCNN.2009.5179060
Electrical and Computer Engineering
National Science Foundation (U.S.)
Keywords and Phrases
Benchmark Functions; Differential Evolution; Learning Capabilities; Multi-Layer Perceptrons; Nonlinear Functions; PSO Algorithms; Particle Swarm Optimization Algorithm
Article - Conference proceedings
© 2009 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.