Linear Quadratic Tracking Control of Partially-Unknown Continuous-Time Systems using Reinforcement Learning
In this technical note, an online learning algorithm is developed to solve the linear quadratic tracking (LQT) problem for partially-unknown continuous-time systems. It is shown that the value function is quadratic in terms of the state of the system and the command generator. Based on this quadratic form, an LQT Bellman equation and an LQT algebraic Riccati equation (ARE) are derived to solve the LQT problem. The integral reinforcement learning technique is used to find the solution to the LQT ARE online, without requiring knowledge of the system drift dynamics or the command generator dynamics. The convergence of the proposed online algorithm to the optimal control solution is verified. To demonstrate the effectiveness of the proposed approach, a simulation example is provided.
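The quadratic value-function structure described in the abstract can be illustrated with an offline sketch. All matrices below are illustrative assumptions (a second-order plant tracking a step reference), the discount is handled by shifting the augmented drift, and a Kleinman-style policy iteration stands in for the paper's online integral-RL loop, which, unlike this sketch, does not need the drift dynamics:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Illustrative plant and command generator (assumptions, not from the paper).
A = np.array([[0.0, 1.0], [-1.0, -2.0]])      # plant drift dynamics
B = np.array([[0.0], [1.0]])                  # input matrix
C = np.array([[1.0, 0.0]])                    # tracked output y = C x
F = np.array([[0.0]])                         # command generator r' = F r (step reference)
gamma = 0.1                                   # discount factor (assumed value)

# Augmented state X = [x; r] makes the value function quadratic: V(X) = X' P X.
A_aug = np.block([[A, np.zeros((2, 1))],
                  [np.zeros((1, 2)), F]])
B_aug = np.vstack([B, np.zeros((1, 1))])
G = np.hstack([C, -np.eye(1)])                # tracking error e = G X
Qa = G.T @ G                                  # error weight (unit output weight)
R = np.array([[1.0]])

# A discounted cost is equivalent to a standard cost with the drift
# shifted by -gamma/2 * I; this also stabilizes the reference mode.
As = A_aug - 0.5 * gamma * np.eye(3)

# Kleinman-style policy iteration: policy evaluation is a Lyapunov solve,
# followed by a gain update -- the offline counterpart of iterating on the
# LQT Bellman equation.
K = np.zeros((1, 3))                          # K = 0 stabilizes As here
for _ in range(15):
    Ac = As - B_aug @ K
    P = solve_continuous_lyapunov(Ac.T, -(Qa + K.T @ R @ K))
    K = np.linalg.solve(R, B_aug.T @ P)

# The iteration converges to the solution of the (shifted) LQT ARE.
P_are = solve_continuous_are(As, B_aug, Qa, R)
print(np.allclose(P, P_are))
```

The online algorithm in the note replaces the Lyapunov solve with a least-squares fit of the value-function kernel P from integral reinforcement measured along trajectories, which is what removes the dependence on the drift matrices A and F.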
H. Modares and F. L. Lewis, "Linear Quadratic Tracking Control of Partially-Unknown Continuous-Time Systems using Reinforcement Learning," IEEE Transactions on Automatic Control, vol. 59, no. 11, pp. 3051-3056, Institute of Electrical and Electronics Engineers (IEEE), Nov 2014.
The definitive version is available at https://doi.org/10.1109/TAC.2014.2317301
Electrical and Computer Engineering
Keywords and Phrases
Causal Solutions; Linear Quadratic Tracking; Policy Iteration; Reinforcement Learning; Integral Reinforcement Learning
Article - Journal
© 2014 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.
01 Nov 2014