Data-Driven Robust Control of Discrete-Time Uncertain Linear Systems Via Off-Policy Reinforcement Learning
This paper presents a model-free solution to the robust stabilization problem for discrete-time linear dynamical systems with bounded and mismatched uncertainty. The robust control problem is first translated into an optimal controller design problem, which reduces to solving an algebraic Riccati equation (ARE), and it is shown that the optimal controller obtained from the ARE robustly stabilizes the uncertain system. To solve this ARE without a model, off-policy reinforcement learning (RL) is employed, removing any requirement for knowledge of the system dynamics. In addition, on- and off-policy RL methods are compared with respect to their robustness to probing noise and their dependence on the system dynamics. Finally, a simulation example validates the efficacy of the presented off-policy RL approach.
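To give a concrete flavor of the data-driven scheme the abstract describes, the following Python code is a minimal sketch of off-policy, Q-learning-style policy iteration for a nominal discrete-time LQR problem; it is not the paper's exact algorithm for the uncertain case. The plant matrices A and B, the cost weights, and all function names are illustrative assumptions: A and B appear only to generate data, and the learner never reads them. Note how the exploratory dataset is collected once and reused across all iterations, so probing noise enters the regression data rather than biasing the learned policy.

```python
# Minimal sketch: off-policy Q-learning policy iteration for discrete-time LQR.
# The plant (A, B) is a hypothetical example used only to *generate* data;
# the learning loop itself is model-free.
import numpy as np
from scipy.linalg import solve_discrete_are  # used only to verify the result

rng = np.random.default_rng(0)

# Hypothetical second-order plant (unknown to the learner).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
n, m = B.shape
Qx, R = np.eye(n), np.eye(m)              # stage cost x'Qx x + u'R u

def phi(z):
    """Quadratic basis so that z^T H z = phi(z) @ theta, H symmetric."""
    N = len(z)
    return np.array([z[i] * z[j] * (1.0 if i == j else 2.0)
                     for i in range(N) for j in range(i, N)])

def unpack(theta, N):
    """Rebuild the symmetric Q-function kernel H from the parameter vector."""
    H = np.zeros((N, N)); k = 0
    for i in range(N):
        for j in range(i, N):
            H[i, j] = H[j, i] = theta[k]; k += 1
    return H

# Collect one batch of exploratory data (off-policy: reused every iteration).
T = 200
X  = rng.standard_normal((T, n))          # probing states
U  = rng.standard_normal((T, m))          # probing inputs
Xn = X @ A.T + U @ B.T                    # "measured" next states

# Policy iteration on the fixed dataset.
K = np.zeros((m, n))                      # initial stabilizing gain (A is stable)
for _ in range(20):
    Phi, c = [], []
    for x, u, xn in zip(X, U, Xn):
        z  = np.concatenate([x, u])
        zn = np.concatenate([xn, -K @ xn])    # target-policy action at x'
        Phi.append(phi(z) - phi(zn))          # Bellman temporal difference
        c.append(x @ Qx @ x + u @ R @ u)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
    H = unpack(theta, n + m)
    Huu, Hux = H[n:, n:], H[n:, :n]
    K_new = np.linalg.solve(Huu, Hux)         # greedy policy improvement
    if np.linalg.norm(K_new - K) < 1e-8:
        break
    K = K_new

# Verify against the model-based DARE solution.
P = solve_discrete_are(A, B, Qx, R)
K_star = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("learned K:", K, "\nDARE K*  :", K_star)
```

Because the regression targets a fixed point of the Bellman equation rather than on-policy returns, the same batch of probing data serves every policy-evaluation step; this is the practical advantage over on-policy RL that the abstract highlights.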
Y. Yang et al., "Data-Driven Robust Control of Discrete-Time Uncertain Linear Systems Via Off-Policy Reinforcement Learning," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 12, pp. 3735-3747, Dec. 2019.
The definitive version is available at https://doi.org/10.1109/TNNLS.2019.2897814
Electrical and Computer Engineering
Center for High Performance Computing Research
Keywords and Phrases
Model-Free; Off-Policy; On-Policy; Reinforcement Learning (RL); Robust Control; System Uncertainty
International Standard Serial Number (ISSN)
2162-237X
Document Type
Article - Journal
© 2019 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.
01 Dec 2019