Adaptive Critic Based Neural Networks for Control
Dynamic programming is an exact method for determining optimal control of a discretized system. Unfortunately, for nonlinear systems the computations this method requires become prohibitive. This study investigates adaptive neural networks that use dynamic programming methodology to develop near-optimal control laws. First, a one-dimensional infinite-horizon problem is examined. Problems involving cost functions with final-state constraints are then considered for one-dimensional linear and nonlinear systems, and a two-dimensional linear problem is also investigated. In addition to these examples, the corrective capabilities of critics are demonstrated. Synthesis of the networks in this study requires no external training, and they need no a priori knowledge of the functional form of the control. Comparisons with specific optimal control techniques show that the networks yield optimal control over the entire range of training.
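The flavor of the dynamic-programming methodology the abstract refers to can be illustrated with a minimal sketch. This is not the paper's network-based method; it is a hypothetical scalar example in which the critic is restricted to the quadratic form J(x) = w·x², so the Bellman backup reduces to a one-parameter update. The plant x(k+1) = a·x(k) + b·u(k) and the cost weights q, r are illustrative values, not taken from the paper.

```python
# Hypothetical scalar example (values assumed, not from the paper):
# value-iteration-style critic update for the linear plant
#   x(k+1) = a*x(k) + b*u(k)
# with stage cost q*x^2 + r*u^2 and quadratic critic J(x) = w*x^2.
a, b, q, r = 0.9, 0.5, 1.0, 1.0

w = 0.0  # critic weight, initialized to zero
for _ in range(200):
    # Greedy (actor) feedback gain implied by the current critic:
    # u = -k*x minimizes q*x^2 + r*u^2 + w*(a*x + b*u)^2.
    k = a * b * w / (r + b * b * w)
    # Critic update: Bellman backup of the quadratic cost-to-go,
    # obtained by substituting the minimizing u back in.
    w = q + a * a * w * r / (r + b * b * w)

# At convergence, w satisfies the scalar discrete algebraic
# Riccati equation, i.e. the backup leaves it unchanged.
residual = w - (q + a * a * w * r / (r + b * b * w))
print(w, abs(residual) < 1e-9)
```

For this quadratic case the iteration converges to the Riccati solution, which is what makes it a useful sanity check: the paper's neural critics target the same fixed point without assuming the quadratic functional form.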
S. N. Balakrishnan and V. Biega, "Adaptive Critic Based Neural Networks for Control," Proceedings of the American Control Conference, Seattle, WA, Institute of Electrical and Electronics Engineers (IEEE), 1995.
Mechanical and Aerospace Engineering
Article - Conference proceedings
© 1995 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.