Optimal Tracking Control of Nonlinear Partially-Unknown Constrained-Input Systems using Integral Reinforcement Learning

Abstract

In this paper, a new formulation for the optimal tracking control problem (OTCP) of continuous-time nonlinear systems is presented. This formulation extends the integral reinforcement learning (IRL) technique, a method for solving optimal regulation problems, to learn the solution to the OTCP. Unlike existing solutions to the OTCP, the proposed method requires neither knowledge nor identification of the system drift dynamics, and it accounts for the input constraints a priori. An augmented system composed of the error system dynamics and the command generator dynamics is used to introduce a new nonquadratic discounted performance function for the OTCP. This performance function encodes the input constraints into the optimization problem. A tracking Hamilton-Jacobi-Bellman (HJB) equation associated with this nonquadratic performance function is derived, which gives the optimal control solution. An online IRL algorithm is presented to learn the solution to the tracking HJB equation without knowing the system drift dynamics. Convergence to a near-optimal control solution and stability of the whole system are shown under a persistence of excitation condition. Simulation examples are provided to show the effectiveness of the proposed method.
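For orientation, the following is a minimal sketch of the kind of discounted nonquadratic performance function and IRL Bellman equation the abstract refers to; the symbols X (augmented state), Q, R, the saturation bound \lambda, discount rate \gamma, and reinforcement interval T are illustrative placeholders and may not match the paper's exact notation.

V\big(X(t)\big) = \int_{t}^{\infty} e^{-\gamma(\tau - t)} \Big[ X^{\top}(\tau) Q X(\tau) + W\big(u(\tau)\big) \Big] \, d\tau,
\qquad
W(u) = 2 \int_{0}^{u} \lambda \big(\tanh^{-1}(v/\lambda)\big)^{\top} R \, dv.

The nonquadratic penalty W(u) keeps the resulting control bounded by \lambda. The IRL form rewrites this value over a short interval, so the drift dynamics never appear explicitly:

V\big(X(t)\big) = \int_{t}^{t+T} e^{-\gamma(\tau - t)} \Big[ X^{\top} Q X + W(u) \Big] \, d\tau + e^{-\gamma T} \, V\big(X(t+T)\big).

Under these assumptions, the associated constrained optimal control takes the saturated form u^{*}(X) = -\lambda \tanh\!\big( \tfrac{1}{2\lambda} R^{-1} g^{\top}(X) \nabla V^{*}(X) \big), where g(X) is the input dynamics of the augmented system.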

Department(s)

Electrical and Computer Engineering

Keywords and Phrases

Control; Navigation; Neural Networks; Optimization; Continuous-Time Nonlinear Systems; Input Constraints; Near-Optimal Control; Optimal Control Solution; Optimal Tracking Control; Optimization Problems; Performance Functions; Persistence of Excitation; Reinforcement Learning; Integral Reinforcement Learning

International Standard Serial Number (ISSN)

0005-1098

Document Type

Article - Journal

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2014 Elsevier, All rights reserved.

Publication Date

01 Jul 2014
