A General Insight into the Effect of Neuron Structure on Classification

Abstract

This paper gives a general insight into how the neuron structure in a multilayer perceptron (MLP) can affect its classification ability. Most common neuron structures are based on monotonic activation functions and linear input mappings. In comparison, the proposed neuron structure utilizes a nonmonotonic activation function and/or a nonlinear input mapping to increase the power of a single neuron. An MLP built from these more powerful neurons usually requires fewer hidden nodes than a conventional MLP for solving classification problems. Fewer neurons mean fewer network weights that must be optimally determined by a learning algorithm. The performance of a learning algorithm usually improves as the number of weights, i.e., the dimension of the search space, is reduced. This helps the learning algorithm escape local optima and increases its convergence speed regardless of which algorithm is used. Several two-dimensional examples are worked out by hand to visualize how the number of neurons can be reduced by choosing an appropriate neuron structure. Moreover, to show the efficiency of the proposed scheme on real-world classification problems, the Iris data classification problem is solved using an MLP whose neurons are equipped with nonmonotonic activation functions, and the result is compared against two well-known monotonic activation functions.
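As a rough illustration of the abstract's core idea (this sketch is not taken from the paper; the Gaussian "bump" activation, the band-shaped class, and all parameter values are illustrative assumptions), the following Python snippet shows how a single neuron with a nonmonotonic activation can represent a region that would require two monotonic-sigmoid neurons:

    import numpy as np

    # Illustrative assumption (not the paper's construction): one class
    # occupies the band -1 < w.x + b < 1. A monotonic sigmoid carves only
    # a half-plane, so two such neurons are needed for the band; one
    # nonmonotonic "bump" neuron can represent it alone.

    def sigmoid(z):                 # monotonic activation
        return 1.0 / (1.0 + np.exp(-z))

    def bump(z):                    # nonmonotonic activation (Gaussian bump)
        return np.exp(-z ** 2)

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(1000, 2))   # random 2-D sample points
    w, b = np.array([1.0, 0.0]), 0.0         # linear input mapping w.x + b
    z = X @ w + b
    labels = np.abs(z) < 1.0                 # band-shaped class

    # One nonmonotonic neuron: bump(z) exceeds exp(-1) exactly inside the band.
    pred_bump = bump(z) > np.exp(-1.0)

    # Two monotonic neurons combined: sigmoid(z + 1) AND sigmoid(-(z - 1)).
    pred_two_sig = (sigmoid(z + 1.0) > 0.5) & (sigmoid(-(z - 1.0)) > 0.5)

    print("1 nonmonotonic neuron accuracy:", (pred_bump == labels).mean())
    print("2 monotonic neurons accuracy:  ", (pred_two_sig == labels).mean())

Both classifiers recover the band here; the point is that the nonmonotonic neuron does so with half the weights, which is the reduction in search-space dimension the abstract attributes to the proposed structure.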

Department(s)

Electrical and Computer Engineering

Keywords and Phrases

Classification; Iris Data Classification; Multilayer Perceptron (MLP); Neuron Structure; Nonlinear Input Mapping; Nonmonotonic Activation Function

International Standard Serial Number (ISSN)

0219-1377

Document Type

Article - Journal

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2012 Springer-Verlag. All rights reserved.

Publication Date

01 Jan 2012
