Online Reinforcement Learning-Based Neural Network Controller Design for Affine Nonlinear Discrete-Time Systems

Qinmin Yang
Jagannathan Sarangapani, Missouri University of Science and Technology

This document has been relocated to http://scholarsmine.mst.edu/ele_comeng_facwork/1798


Abstract

In this paper, a novel reinforcement learning neural network (NN)-based controller, referred to as an adaptive critic controller, is proposed for general multi-input multi-output affine unknown nonlinear discrete-time systems in the presence of bounded disturbances. Adaptive critic designs consist of two entities: an action network that produces an optimal solution, and a critic that evaluates the performance of the action network. The critic is termed adaptive because it adapts itself to output the optimal cost-to-go function, while the action network is adapted simultaneously based on information from the critic. In our online learning method, one NN is designated as the critic NN, which approximates the Bellman equation. An action NN is employed to derive the control signal that tracks a desired system trajectory while minimizing the cost function. Online weight-tuning schemes for these two NNs are also derived, and uniform ultimate boundedness (UUB) of the tracking error and weight estimates is shown. The effectiveness of the controller is evaluated on a two-link robotic arm system.
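To illustrate the actor-critic structure described above, the following is a minimal sketch in Python/NumPy: a critic NN is tuned online against the temporal-difference (Bellman) error, and an action NN is tuned to reduce the critic's cost-to-go estimate. The scalar affine system, the tanh basis networks, the quadratic utility, and all learning gains are illustrative assumptions for a regulation task (tracking a zero reference), not the paper's exact tuning laws or stability-proof conditions.

```python
import numpy as np

# Assumed scalar affine discrete-time plant: x_{k+1} = f(x_k) + g(x_k) u_k + d_k,
# with f, g treated as unknown by the controller and d_k a bounded disturbance.
def f(x):
    return 0.8 * np.sin(x)  # illustrative drift dynamics

def g(x):
    return 1.0              # illustrative input gain

rng = np.random.default_rng(0)

# Single-hidden-layer approximators with fixed random input weights;
# only the output weights Wc (critic) and Wa (action) are tuned online.
H = 8
Wc_in = rng.normal(size=(H, 1)); Wc = np.zeros(H)  # critic NN
Wa_in = rng.normal(size=(H, 1)); Wa = np.zeros(H)  # action NN

def phi(W_in, x):
    """Hidden-layer activations for scalar input x."""
    return np.tanh(W_in @ np.array([x])).ravel()

def dJdx(x):
    """Gradient of the critic's cost-to-go estimate w.r.t. the state."""
    z = Wc_in @ np.array([x])
    return Wc @ ((1.0 - np.tanh(z) ** 2).ravel() * Wc_in.ravel())

alpha_c, alpha_a, gamma = 0.05, 0.02, 0.9  # illustrative gains / discount
x = 1.0
for k in range(500):
    u = Wa @ phi(Wa_in, x)                             # control from action NN
    x_next = f(x) + g(x) * u + rng.normal(scale=0.01)  # bounded disturbance
    r = x ** 2 + u ** 2                                # one-step utility (assumed)

    # Critic: semi-gradient update on the Bellman (TD) error.
    delta = r + gamma * (Wc @ phi(Wc_in, x_next)) - Wc @ phi(Wc_in, x)
    Wc += alpha_c * delta * phi(Wc_in, x)

    # Action NN: gradient step to decrease the critic's cost-to-go
    # at the next state, via the chain rule through x_{k+1}.
    Wa -= alpha_a * dJdx(x_next) * g(x) * phi(Wa_in, x)
    x = x_next
```

Under these assumptions the state remains bounded near the origin while both weight vectors adapt online; the paper's actual tuning laws additionally guarantee UUB of the tracking error and weight estimates.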