Online Solution to the Linear Quadratic Tracking Problem of Continuous-Time Systems using Reinforcement Learning

Abstract

In this paper, reinforcement learning (RL) is employed to find a causal solution to the linear quadratic tracker (LQT) for continuous-time systems online in real time. Although several RL techniques have been developed in the literature to solve the LQ regulator, to our knowledge there is no rigorous result on using RL to solve the LQ tracker. This is mainly because the tracking control law contains a feedforward term that must ordinarily be computed noncausally, backwards in time. To deal with this noncausality, an augmented system composed of the original system and the command-generator dynamics is constructed, and an augmented LQT algebraic Riccati equation is derived for solving the LQT problem. In this formulation, RL techniques can be applied to solve the LQT problem, computing the feedforward and feedback terms simultaneously online in real time. The convergence of the proposed online algorithms to the optimal control solution is verified. A simulation example is provided to show the efficiency of the proposed approach.
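
The following is a minimal numerical sketch of the augmented formulation described in the abstract: the plant and a command generator are stacked into one augmented state, a single Riccati equation is solved for that augmented system, and one gain yields the feedback and feedforward terms together, causally. The plant matrices, the constant-reference command generator, the weights, and the discount factor gamma are illustrative assumptions not given in the abstract; the paper's online RL algorithm targets this kind of solution without solving the Riccati equation offline.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Original plant: x_dot = A x + B u, tracked output y = C x (assumed example values)
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Command generator r_dot = F r; F = 0 generates a constant reference (an assumption)
F = np.array([[0.0]])

# Augmented state X = [x; r] with dynamics X_dot = T X + B1 u
T = np.block([[A, np.zeros((2, 1))],
              [np.zeros((1, 2)), F]])
B1 = np.vstack([B, np.zeros((1, 1))])

# Quadratic cost on the tracking error e = C x - r and on the control effort
Q = np.array([[10.0]])
R = np.array([[1.0]])
C1 = np.hstack([C, -np.eye(1)])   # e = C1 X
Q1 = C1.T @ Q @ C1

# A small discount keeps the augmented Riccati equation well posed even though
# the command generator is only marginally stable (an assumption not stated in
# the abstract). The discounted equation is solved as a standard CARE with the
# shifted drift matrix T - (gamma/2) I.
gamma = 0.1
P = solve_continuous_are(T - 0.5 * gamma * np.eye(3), B1, Q1, R)

# One gain on the augmented state gives the feedback (on x) and feedforward
# (on r) parts of the tracking control simultaneously: u = -K [x; r]
K = np.linalg.solve(R, B1.T @ P)
print("Augmented LQT gain K =", K)
```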

Meeting Name

52nd IEEE Conference on Decision and Control (2013: Dec. 10-13, Florence, Italy)

Department(s)

Electrical and Computer Engineering

Keywords and Phrases

Nonlinear Control Systems; Optimal Control Systems; Reinforcement Learning; Riccati Equations; Algebraic Riccati Equations; Augmented Systems; Command Generator Dynamics; Linear Quadratic Trackers; Linear Quadratic Tracking; Online Algorithms; Optimal Control Solution; Simulation Example; Problem Solving

International Standard Book Number (ISBN)

978-1-4673-5717-3

International Standard Serial Number (ISSN)

0191-2216

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2013 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Dec 2013
