Continuous-Time Q-Learning for Infinite-Horizon Discounted Cost Linear Quadratic Regulator Problems

Abstract

This paper presents a Q-learning method for solving the discounted linear quadratic regulator (LQR) problem for continuous-time (CT) continuous-state systems. Most existing methods for solving the LQR problem for CT systems require partial or complete knowledge of the system dynamics. Q-learning is effective for unknown dynamical systems but has generally been well understood only for discrete-time systems. The contribution of this paper is a Q-learning methodology for CT systems that solves the LQR problem without any knowledge of the system dynamics. A natural and rigorously justified parameterization of the Q-function is given in terms of the state, the control input, and its derivatives. This parameterization allows the implementation of an online Q-learning algorithm for CT systems. Simulation results supporting the theoretical development are also presented.
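As a rough illustration of the model-free idea summarized above, the sketch below runs quadratic Q-function policy iteration on a small-step Euler discretization of a hypothetical double-integrator plant. This is not the paper's continuous-time parameterization (which involves the control input's derivatives); it is a standard discrete-time approximation offered only to show how the Q-matrix can be fitted from data without using the system matrices. All plant matrices, cost weights, the discount rate, and tuning values are illustrative assumptions.

```python
import numpy as np

np.random.seed(0)

# Hypothetical double-integrator example: the true (A, B) are used only to
# generate trajectory data and are never revealed to the learner.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Qc = np.eye(2)                    # state cost weight (assumed)
Rc = np.eye(1)                    # control cost weight (assumed)
gamma, dt = 0.5, 0.01             # discount rate and Euler step (assumed)
n, m = 2, 1

def step(x, u):
    """One Euler step of the 'unknown' dynamics (data generation only)."""
    return x + dt * (A @ x + B @ u)

def features(x, u):
    """Quadratic features of z = [x; u], so Q(x, u) = theta . phi(x, u)."""
    z = np.concatenate([x.ravel(), u.ravel()])
    return np.outer(z, z)[np.triu_indices(n + m)]

K = np.zeros((m, n))              # initial policy gain, u = -K x
for sweep in range(20):           # policy iteration on the learned Q
    Phi, y = [], []
    x = np.random.randn(n, 1)
    for k in range(500):
        u = -K @ x + 0.2 * np.random.randn(m, 1)    # exploration noise
        cost = float(x.T @ Qc @ x + u.T @ Rc @ u) * dt
        x_next = step(x, u)
        u_next = -K @ x_next
        # Discounted Bellman identity: Q(x,u) = cost + e^{-gamma dt} Q(x',u')
        Phi.append(features(x, u)
                   - np.exp(-gamma * dt) * features(x_next, u_next))
        y.append(cost)
        x = x_next
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(y), rcond=None)
    H = np.zeros((n + m, n + m))
    H[np.triu_indices(n + m)] = theta
    H = (H + H.T) / 2             # recover the symmetric Q-matrix
    K = np.linalg.solve(H[n:, n:], H[n:, :n])   # greedy improvement

print("learned feedback gain K =", K)
```

In each sweep, the quadratic Q-function of the current policy is fitted by least squares from trajectory data alone, and the greedy policy u = -Huu^{-1} Hux x is read off the learned Q-matrix; the system matrices never enter the learning loop, which is the essential feature the abstract attributes to Q-learning.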

Department(s)

Electrical and Computer Engineering

Keywords and Phrases

Cost Functions; Digital Control Systems; Discrete Time Control Systems; Dynamic Programming; Dynamical Systems; Iterative Methods; Learning Algorithms; Online Systems; Problem Solving; Reinforcement Learning; System Theory; Approximate Dynamic Programming (ADP); Continuous-Time Dynamical Systems; Continuous-Time Systems; Infinite-Horizon Discounted Cost Function; Integral Reinforcement Learning (IRL); Optimal Control; Q-Learning; Value Iteration (VI)

International Standard Serial Number (ISSN)

2168-2267

Document Type

Article - Journal

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2015 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Feb 2015
