Off-Policy Reinforcement Learning for Robust Control of Discrete-Time Uncertain Linear Systems
In this paper, an off-policy reinforcement learning (RL) method is developed for designing robust stabilizing controllers for discrete-time uncertain linear systems. The proposed robust control design consists of two steps. First, the robust control problem is transformed into an optimal control problem. Second, the off-policy RL method is used to design an optimal control policy that guarantees robust stability of the original system with uncertainty. The condition under which the robust control problem and the optimal control problem are equivalent is discussed. The off-policy method requires no knowledge of the system dynamics and efficiently reuses the data collected online to successively improve the approximate optimal control policy at each iteration. Finally, a simulation example is carried out to verify the effectiveness of the presented algorithm for the robust control problem of a discrete-time linear system with uncertainty.
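To make the data-reuse idea concrete, the following is a minimal sketch (not the authors' exact algorithm) of off-policy policy iteration for a nominal discrete-time LQR problem: a single batch of exploratory input-state data is collected once and reused in every iteration, and the system matrices are used only to generate that data, never by the learner. The system, costs, and initial stabilizing gain below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal system x_{k+1} = A x_k + B u_k (assumed only for data generation)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
n, m = 2, 1
Qc = np.eye(n)  # state cost weight
Rc = np.eye(m)  # input cost weight

# Off-policy data: random exploratory states and inputs, collected once
N = 200
X = rng.standard_normal((N, n))
U = rng.standard_normal((N, m))
Xp = X @ A.T + U @ B.T  # observed next states

K = np.array([[1.0, 1.5]])  # initial stabilizing gain (assumption)

for _ in range(12):
    # Policy evaluation: solve z'Hz - z1'Hz1 = x'Qx + u'Ru in least squares
    # for the Q-function matrix H, where z = [x; u] and z1 uses the
    # current target policy u1 = -K x_{k+1} (off-policy: data stays fixed).
    Phi, c = [], []
    for x, u, xp in zip(X, U, Xp):
        z = np.concatenate([x, u])
        z1 = np.concatenate([xp, -(K @ xp)])
        Phi.append(np.kron(z, z) - np.kron(z1, z1))
        c.append(x @ Qc @ x + u @ Rc @ u)
    vecH, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
    H = vecH.reshape(n + m, n + m)
    H = 0.5 * (H + H.T)  # only the symmetric part is identifiable
    # Policy improvement: K = H_uu^{-1} H_ux
    K = np.linalg.solve(H[n:, n:], H[n:, :n])

print("learned gain:", K)
```

With exact (noise-free) data the iteration recovers the LQR-optimal gain; the same fixed batch serves every evaluation step, which is the practical appeal of the off-policy formulation described in the abstract.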
Y. Yang et al., "Off-Policy Reinforcement Learning for Robust Control of Discrete-Time Uncertain Linear Systems," Proceedings of the 36th Chinese Control Conference (2017, Dalian, China), pp. 2507-2512, Institute of Electrical and Electronics Engineers (IEEE), Jan 2017.
The definitive version is available at https://doi.org/10.23919/ChiCC.2017.8027737
36th Chinese Control Conference, CCC 2017 (2017: Jul. 26-28, Dalian, China)
Electrical and Computer Engineering
Intelligent Systems Center
Center for High Performance Computing Research
China Scholarship Council
National Science Foundation (U.S.)
National Natural Science Foundation of China
Mary K. Finley Missouri Endowment
Missouri University of Science and Technology. Intelligent Systems Center
Keywords and Phrases
Model-Free; Off-Policy Reinforcement Learning; Optimal Control; Robust Control; System Uncertainty
International Standard Book Number (ISBN)
International Standard Serial Number (ISSN)
Article - Conference proceedings
© 2017 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.
01 Jan 2017