A One-Layer Recurrent Neural Network for Constrained Pseudoconvex Optimization and its Application for Dynamic Portfolio Optimization
Abstract
In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with existing neural networks for optimization (e.g., projection neural networks), the proposed network can handle a more general class of pseudoconvex optimization problems with equality and bound constraints; in particular, it solves constrained fractional programming problems as a special case. Convergence of the state variables to an optimal solution is guaranteed provided the design parameters in the model exceed the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application to dynamic portfolio optimization is discussed.
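To make the idea concrete, the following is a minimal sketch (not the paper's exact model) of a projection-type recurrent dynamics applied to a small constrained fractional program with a known optimum. The objective, constraint set, step sizes, and the hyperplane-then-clip projection are all illustrative choices for this demo, and the continuous dynamics are integrated with a simple Euler scheme.

```python
# Hedged sketch: Euler-discretized projection dynamics
#   dx/dt = P(x - alpha * grad f(x)) - x
# for the pseudoconvex fractional objective
#   f(x) = (x1^2 + x2^2 + 2) / (x1 + x2)
# subject to x1 + x2 = 2 and 0 <= xi <= 2 (optimum x = (1, 1), f = 2).
# Problem data and parameters are chosen for illustration only.

def grad_f(x):
    """Gradient of the fractional objective via the quotient rule."""
    s = x[0] + x[1]                 # denominator x1 + x2 (positive on the feasible set)
    q = x[0] ** 2 + x[1] ** 2 + 2   # numerator
    return [(2 * xi * s - q) / s ** 2 for xi in x]

def project(x):
    """Project onto {x : x1 + x2 = 2, 0 <= xi <= 2}.

    Hyperplane projection followed by clipping; exact for iterates that
    stay in the box after the shift, which holds along this trajectory.
    """
    shift = (x[0] + x[1] - 2) / 2
    return [min(2.0, max(0.0, xi - shift)) for xi in x]

def solve(x0, alpha=0.5, h=1.0, steps=200):
    """Run the discretized dynamics from a feasible starting point x0."""
    x = list(x0)
    for _ in range(steps):
        g = grad_f(x)
        p = project([xi - alpha * gi for xi, gi in zip(x, g)])
        x = [xi + h * (pi - xi) for xi, pi in zip(x, p)]
    return x

x = solve([0.2, 1.8])  # converges toward the optimum (1, 1)
```

With the equality constraint active, the objective reduces to a strictly convex function of the remaining degree of freedom, so the discretized trajectory contracts geometrically toward (1, 1); the paper's continuous-time model and its parameter lower bounds are, of course, more general than this toy discretization.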
Recommended Citation
Q. Liu et al., "A One-Layer Recurrent Neural Network for Constrained Pseudoconvex Optimization and its Application for Dynamic Portfolio Optimization," Neural Networks, vol. 26, pp. 99-109, Elsevier, Feb 2012.
The definitive version is available at https://doi.org/10.1016/j.neunet.2011.09.001
Department(s)
Computer Science
Keywords and Phrases
Convergence; Differential Inclusion; Lyapunov Function; Pseudoconvex Optimization; Recurrent Neural Networks
International Standard Serial Number (ISSN)
0893-6080
Document Type
Article - Journal
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2012 Elsevier. All rights reserved.
Publication Date
01 Feb 2012