Training Fuzzy Number Neural Networks with Alpha-cut Refinements

Abstract

In a fuzzy number neural network, the inputs, weights, and outputs are general fuzzy numbers. The requirement that the α-cuts nest, i.e., F̄_α(1) ⊂ F̄_α(2) whenever α(1) > α(2), imposes an enormous number of constraints on the weight parameterizations during training. This problem can be solved through a careful choice of weight representation. The new representation is unconstrained, so that standard neural network training techniques may be applied. Unfortunately, fuzzy number neural networks still have many parameters to determine during training, since each weight is represented by a vector. Thus moderate to large fuzzy number neural networks suffer from the usual maladies of very large neural networks. In this paper, we discuss a method for effectively reducing the dimensionality of networks during training. Each fuzzy number weight is represented by the endpoints of its α-cuts for some discretization 0 ⩽ α₁ < α₂ < … < αₙ ⩽ 1. To reduce dimensionality, training is first done using only a small subset of the αᵢ. After successful training, linear interpolation is used to estimate additional α-cut endpoints. The network is then retrained to tune these interpolated values. This refinement is repeated as needed until the network is fully trained at the desired discretization in α.
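The abstract points to two ingredients worth sketching: an unconstrained weight representation whose α-cuts nest automatically, and the interpolate-then-retrain refinement. Below is a minimal NumPy sketch under stated assumptions; the softplus-based parameterization, the function names (nested_cuts, refine, params_from_cuts), and the three-to-five-level schedule are illustrative, not the construction from the paper.

import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def inv_softplus(y):
    # Inverse of softplus; valid for y > 0.
    return np.log(np.expm1(y))

def nested_cuts(theta, n):
    # Map 2n unconstrained parameters to alpha-cut endpoints (left[i], right[i])
    # at levels alpha_1 < ... < alpha_n, with [left[i+1], right[i+1]] nested
    # inside [left[i], right[i]].  (Assumed parameterization, not the paper's.)
    left = np.empty(n)
    right = np.empty(n)
    left[0] = theta[0]                                    # unconstrained base
    left[1:] = left[0] + np.cumsum(softplus(theta[1:n]))  # nondecreasing in alpha
    right[n - 1] = left[n - 1] + softplus(theta[n])       # nonnegative core width
    right[:n - 1] = right[n - 1] + np.cumsum(softplus(theta[n + 1:]))[::-1]
    return left, right

def params_from_cuts(left, right):
    # Invert nested_cuts so retraining can resume from interpolated endpoints.
    # Assumes strict nesting (all endpoint differences positive).
    n = left.size
    theta = np.empty(2 * n)
    theta[0] = left[0]
    theta[1:n] = inv_softplus(np.diff(left))
    theta[n] = inv_softplus(right[-1] - left[-1])
    theta[n + 1:] = inv_softplus((-np.diff(right))[::-1])
    return theta

def refine(alphas_coarse, left, right, alphas_fine):
    # Seed endpoints at the new alpha levels by linear interpolation.
    return (np.interp(alphas_fine, alphas_coarse, left),
            np.interp(alphas_fine, alphas_coarse, right))

# Example: a weight trained at 3 coarse levels, refined to 5 levels.
coarse = np.array([0.0, 0.5, 1.0])
theta = np.random.randn(2 * coarse.size)       # stands in for a trained weight
l, r = nested_cuts(theta, coarse.size)
fine = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
lf, rf = refine(coarse, l, r, fine)
theta_fine = params_from_cuts(lf, rf)          # warm start for the retraining pass

Any strictly positive monotone map would serve in place of softplus; the point is only that the nesting constraints vanish from the optimization, and that linearly interpolated endpoints yield a feasible warm start for retraining at the finer discretization.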

Meeting Name

IEEE International Conference on Computational Cybernetics and Simulation: Systems, Man and Cybernetics (1997: Oct. 12-15, Orlando, FL)

Department(s)

Electrical and Computer Engineering

International Standard Book Number (ISBN)

0-7803-4053-1

International Standard Serial Number (ISSN)

1062-922X

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 1997 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Jan 1997
