In this paper, a novel reinforcement-learning neural network (NN)-based controller, referred to as an adaptive critic controller, is proposed for general multi-input multi-output (MIMO) affine unknown nonlinear discrete-time systems in the presence of bounded disturbances. Adaptive critic designs consist of two entities: an action network that produces the control action and a critic that evaluates the performance of the action network. The critic is termed adaptive because it adapts itself to output the optimal cost-to-go function, and the action network is adapted simultaneously based on information from the critic. In our online learning method, one NN is designated as the critic NN, which approximates the Bellman equation. An action NN is employed to derive the control signal that tracks a desired system trajectory while minimizing the cost function. Online weight-tuning schemes for these two NNs are derived, and uniform ultimate boundedness (UUB) of the tracking error and the weight estimates is shown. The effectiveness of the controller is evaluated on a two-link robotic arm system.
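The critic/action structure described above can be illustrated with a minimal sketch. This is NOT the paper's exact weight-tuning laws: the scalar affine plant, the quadratic one-step utility, the random fixed first-layer weights, the clipping of the control, and all gains below are illustrative assumptions. The critic NN is tuned with a semi-gradient step on the Bellman (temporal-difference) error, and the action NN descends the critic's estimate of the discounted cost, standing in for the paper's online tuning schemes.

```python
import numpy as np

rng = np.random.default_rng(0)

H = 8            # hidden-layer width (assumption)
alpha_c = 0.05   # critic learning rate (assumption)
alpha_a = 0.02   # action learning rate (assumption)
gamma = 0.9      # discount factor in the Bellman recursion (assumption)

# First-layer weights are fixed at random values; only the output-layer
# weights are tuned online (a common simplification for one-layer NNs).
Vc = rng.standard_normal((H, 1))
Wc = np.zeros((H, 1))
Va = rng.standard_normal((H, 1))
Wa = np.zeros((H, 1))

phi = np.tanh  # hidden-layer activation

def critic(x):
    """Approximate cost-to-go J_hat(x)."""
    return (Wc.T @ phi(Vc * x)).item()

def action(x):
    """Control u_hat(x), clipped as a crude projection (assumption)."""
    return float(np.clip((Wa.T @ phi(Va * x)).item(), -2.0, 2.0))

def plant(x, u):
    """Toy stable scalar affine plant x_{k+1} = f(x) + g(x)*u (assumption)."""
    return 0.8 * x + 0.5 * u

def utility(x, u):
    """One-step quadratic cost r(x, u) (assumption)."""
    return x**2 + 0.1 * u**2

for episode in range(50):
    x = rng.uniform(-1.0, 1.0)
    for k in range(40):
        u = action(x)
        x_next = plant(x, u)
        # Critic update: semi-gradient step on the temporal-difference
        # (Bellman) error  e_c = r + gamma*J_hat(x') - J_hat(x).
        e_c = utility(x, u) + gamma * critic(x_next) - critic(x)
        Wc += alpha_c * e_c * phi(Vc * x)
        # Action update: descend d/du [ r(x,u) + gamma*J_hat(x_{k+1}) ]
        # w.r.t. the output weights; dJ_hat/du is taken by finite
        # differences through the plant (illustrative choice).
        du = 1e-4
        dJ_du = (critic(plant(x, u + du)) - critic(x_next)) / du
        Wa -= alpha_a * (0.2 * u + gamma * dJ_du) * phi(Va * x)
        x = x_next
```

Because the control is clipped and the toy plant is open-loop stable, the state stays bounded during learning, loosely mirroring the UUB property established in the paper for the tracking error and weight estimates.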
Q. Yang and J. Sarangapani, "Online Reinforcement Learning-Based Neural Network Controller Design for Affine Nonlinear Discrete-Time Systems," Proceedings of the American Control Conference, 2007. ACC'07, Institute of Electrical and Electronics Engineers (IEEE), Jul 2007.
The definitive version is available at http://dx.doi.org/10.1109/ACC.2007.4282709
Electrical and Computer Engineering
National Science Foundation (U.S.)
Keywords and Phrases
MIMO Systems; Control System Synthesis; Learning (Artificial Intelligence); Neural Nets; Nonlinear Control Systems; Optimal Control; Discrete-Time Systems
Article - Conference proceedings
© 2007 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.