Abstract

In this paper, a novel reinforcement-learning neural network (NN)-based controller, referred to as an adaptive critic controller, is proposed for general multi-input multi-output affine unknown nonlinear discrete-time systems in the presence of bounded disturbances. Adaptive critic designs consist of two entities: an action network that produces an optimal solution and a critic that evaluates the performance of the action network. The critic is termed adaptive because it adapts itself to output the optimal cost-to-go function, and the action network is adapted simultaneously based on information from the critic. In our online learning method, one NN is designated as the critic NN, which approximates the Bellman equation. An action NN is employed to derive the control signal that tracks a desired system trajectory while minimizing the cost function. Online weight tuning schemes for these two NNs are derived, and uniform ultimate boundedness (UUB) of the tracking error and the weight estimates is shown. The effectiveness of the controller is evaluated on a two-link robotic arm system.
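The following is a minimal sketch of the actor-critic structure the abstract describes: a critic NN that tunes its weights online against a Bellman (temporal-difference) residual, and an action NN that tunes its weights to reduce the critic's cost-to-go estimate and the tracking error. The plant, stage cost, single-layer NN structure, and gradient-style update laws here are illustrative assumptions, not the paper's exact tuning schemes or stability proof.

```python
# Minimal online adaptive-critic (actor-critic) tracking controller sketch.
# Assumptions (not from the paper): scalar affine plant x(k+1) = f(x) + g(x)u,
# single-layer NNs with fixed random input weights, quadratic stage cost,
# heuristic gradient updates for the output weights only.
import numpy as np

rng = np.random.default_rng(0)

def f(x):                       # unknown plant drift (hypothetical)
    return 0.8 * np.sin(x)

def g(x):                       # unknown input gain (hypothetical)
    return 1.0 + 0.1 * np.cos(x)

def plant(x, u):                # affine discrete-time dynamics
    return f(x) + g(x) * u

N_HID = 20
V_c = rng.normal(size=(N_HID, 2))   # critic hidden-layer weights (fixed)
V_a = rng.normal(size=(N_HID, 2))   # action hidden-layer weights (fixed)
w_c = np.zeros(N_HID)               # critic output weights (tuned online)
w_a = np.zeros(N_HID)               # action output weights (tuned online)

def phi(V, z):                  # hidden-layer activation
    return np.tanh(V @ z)

alpha_c, alpha_a = 0.05, 0.05   # learning rates (assumed)
gamma = 0.9                     # discount factor in the cost-to-go

def desired(k):                 # desired trajectory x_d(k)
    return np.sin(0.05 * k)

x = 0.0
for k in range(500):
    xd, xd_next = desired(k), desired(k + 1)
    e = x - xd                                   # tracking error
    z = np.array([x, e])

    u = w_a @ phi(V_a, z)                        # action NN control signal

    x_next = plant(x, u)                         # apply control, observe next state
    e_next = x_next - desired(k + 1)
    z_next = np.array([x_next, e_next])

    # Critic NN approximates the cost-to-go J; Bellman (TD) residual.
    r = e**2 + 0.1 * u**2                        # stage cost (assumed quadratic)
    J = w_c @ phi(V_c, z)
    J_next = w_c @ phi(V_c, z_next)
    delta = r + gamma * J_next - J

    # Online weight tuning: critic descends the squared Bellman residual,
    # action NN descends a mix of the critic estimate and the tracking error
    # (heuristic stand-in for the paper's update laws).
    w_c += alpha_c * delta * phi(V_c, z)
    w_a -= alpha_a * (J + e_next) * phi(V_a, z)

    x = x_next

print(f"final tracking error: {x - desired(500):+.4f}")
```

In this sketch only the output weights are adapted, which keeps both update laws linear in the tuned parameters; the paper's scheme additionally establishes UUB of the tracking error and weight estimates, which this illustration does not attempt to prove.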

Meeting Name

American Control Conference, 2007. ACC'07

Department(s)

Electrical and Computer Engineering

Second Department

Computer Science

Sponsor(s)

National Science Foundation (U.S.)

Keywords and Phrases

MIMO Systems; Control System Synthesis; Learning (Artificial Intelligence); Neural Nets; Nonlinear Control Systems; Optimal Control; Discrete-Time Systems

Document Type

Article - Conference proceedings

Document Version

Final Version

File Type

text

Language(s)

English

Rights

© 2007 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Jul 2007
