Online Reinforcement Learning Control of Unknown Nonaffine Nonlinear Discrete Time Systems

Qinmin Yang
Jagannathan Sarangapani, Missouri University of Science and Technology

This document has been relocated to http://scholarsmine.mst.edu/ele_comeng_facwork/1187


Abstract

In this paper, a novel neural network (NN) based online reinforcement learning controller is designed for nonaffine nonlinear discrete-time systems with bounded disturbances. The nonaffine systems are represented by a nonlinear autoregressive moving average with exogenous input (NARMAX) model with unknown nonlinear functions. An equivalent affine-like representation of the tracking error dynamics is first developed from the original nonaffine system. Subsequently, a reinforcement learning-based NN controller is proposed for the affine-like nonlinear error dynamics. The control scheme consists of two NNs: one NN is designated as the critic, which approximates a predefined long-term cost function, whereas an action NN is employed to derive a control signal that drives the system to track a desired trajectory while simultaneously minimizing the cost function. Offline NN training is not required; online NN weight-tuning rules are derived instead. Using the standard Lyapunov approach, the uniform ultimate boundedness (UUB) of the tracking errors and weight estimates is demonstrated.
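To make the two-NN structure in the abstract concrete, the following is a minimal sketch of an online actor-critic controller on a toy nonaffine discrete-time plant. It is not the paper's exact tuning laws: the plant model, network sizes, learning rates, and the simplification of fixing the hidden-layer weights at random (tuning only the output weights online) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(V, x):
    # Hidden-layer output of a single-hidden-layer NN with fixed random
    # input weights V; only the output weights are tuned online.
    return np.tanh(V @ x)

class OnlineActorCritic:
    """Illustrative sketch of the two-NN scheme: a critic NN approximates
    a long-term cost, and an action NN generates the control signal."""

    def __init__(self, n_in, n_hidden=8, lr_c=0.05, lr_a=0.02):
        self.Vc = rng.normal(scale=0.5, size=(n_hidden, n_in))  # critic hidden weights (fixed)
        self.Va = rng.normal(scale=0.5, size=(n_hidden, n_in))  # action hidden weights (fixed)
        self.wc = np.zeros(n_hidden)  # critic output weights (tuned online)
        self.wa = np.zeros(n_hidden)  # action output weights (tuned online)
        self.lr_c, self.lr_a = lr_c, lr_a

    def control(self, s):
        return float(self.wa @ features(self.Va, s))

    def update(self, s, s_next, cost, gamma=0.9):
        # Temporal-difference-style critic update (a stand-in for the
        # paper's critic tuning law).
        phi_c = features(self.Vc, s)
        J = self.wc @ phi_c
        J_next = self.wc @ features(self.Vc, s_next)
        td = cost + gamma * J_next - J
        self.wc += self.lr_c * td * phi_c
        # Action update: adjust the control weights to reduce the
        # critic's estimate of the long-term cost.
        phi_a = features(self.Va, s)
        self.wa -= self.lr_a * J * phi_a

# Toy nonaffine plant (assumed): x_{k+1} = 0.5 sin(x_k) + tanh(u_k),
# nonlinear in the control input u_k.
ctrl = OnlineActorCritic(n_in=2)
x, ref = 0.5, 0.0
for k in range(200):
    e = x - ref
    s = np.array([x, e])
    u = ctrl.control(s)
    x_next = 0.5 * np.sin(x) + np.tanh(u)
    cost = e**2 + 0.01 * u**2          # one-step cost being accumulated
    s_next = np.array([x_next, x_next - ref])
    ctrl.update(s, s_next, cost)
    x = x_next
```

The loop mirrors the abstract's description: at each step the action NN produces the control, the plant advances, and both sets of output weights are tuned online from the observed one-step cost, with no offline training phase.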