Generalized Neuron: Feedforward and Recurrent Architectures
Abstract
Feedforward neural networks such as multilayer perceptrons (MLPs) and recurrent neural networks are widely used for pattern classification, nonlinear function approximation, density estimation and time series prediction. A large number of neurons are usually required to perform these tasks accurately, which makes MLPs less attractive for implementation on resource-constrained hardware platforms. This paper highlights the benefits of the feedforward and recurrent forms of a compact neural architecture called the generalized neuron (GN). It demonstrates that the GN and the recurrent GN (RGN) perform well at classification, nonlinear function approximation, density estimation and chaotic time series prediction. Owing to its two aggregation functions and two activation functions, the GN is well suited to capturing the nonlinearities of complex problems. Particle swarm optimization (PSO) is proposed as the training algorithm for the GN and RGN. Because of their small number of trainable parameters, the GN and RGN require little memory and few computational resources, making them attractive choices for fast implementation on resource-constrained hardware platforms.
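To make the architecture concrete, the sketch below implements a single generalized neuron and trains it with a standard global-best PSO loop. The specific formulation shown (a summation part passed through a sigmoid, a product part passed through a Gaussian, blended by a weight w), the toy target function, and all hyperparameters are assumptions for illustration, not a verbatim reproduction of the paper's equations.

```python
# Minimal sketch of a generalized neuron (GN) trained by PSO.
# Assumed GN formulation: sigma (summation) part -> sigmoid,
# pi (product) part -> Gaussian, outputs blended by weight w.
import numpy as np

rng = np.random.default_rng(0)

def gn_forward(params, x):
    """Forward pass of one GN for input vector x (assumed formulation)."""
    n = x.size
    w_sigma = params[:n]              # weights of the summation (sigma) part
    w_pi = params[n:2 * n]            # weights of the product (pi) part
    b_sigma, b_pi, w = params[2 * n], params[2 * n + 1], params[2 * n + 2]
    o_sigma = 1.0 / (1.0 + np.exp(-(w_sigma @ x + b_sigma)))   # sigmoid
    s_pi = np.prod(w_pi * x) * b_pi
    o_pi = np.exp(-s_pi ** 2)                                  # Gaussian
    return w * o_sigma + (1.0 - w) * o_pi                      # blended output

def mse(params, X, y):
    preds = np.array([gn_forward(params, x) for x in X])
    return np.mean((preds - y) ** 2)

# Hypothetical toy task: approximate a smooth 1-D function.
X = rng.uniform(-1, 1, size=(50, 1))
y = 0.5 * (np.sin(np.pi * X[:, 0]) + 1.0)

# Global-best PSO over the GN's small parameter vector (2n + 3 values).
dim = 2 * X.shape[1] + 3
n_particles, iters = 20, 200
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_cost = np.array([mse(p, X, y) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    # Inertia and acceleration coefficients are typical values, not the paper's.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    cost = np.array([mse(p, X, y) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best training MSE:", pbest_cost.min())
```

Note that a GN with n inputs has only 2n + 3 trainable parameters under this formulation, which is what makes a gradient-free, population-based optimizer like PSO practical here and keeps the memory footprint small on constrained hardware.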
Recommended Citation
R. V. Kulkarni and G. K. Venayagamoorthy, "Generalized Neuron: Feedforward and Recurrent Architectures," Neural Networks, Elsevier, Sep 2009.
The definitive version is available at https://doi.org/10.1016/j.neunet.2009.07.027
Department(s)
Electrical and Computer Engineering
Sponsor(s)
National Science Foundation (U.S.)
Keywords and Phrases
Density Estimation; Generalized Neuron; Nonlinear Function Approximation; Particle Swarm Optimization (PSO); Recurrent Generalized Neuron; Classification
International Standard Serial Number (ISSN)
0893-6080
Document Type
Article - Journal
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2009 Elsevier. All rights reserved.
Publication Date
01 Sep 2009