Data-Driven Robust Control of Discrete-Time Uncertain Linear Systems Via Off-Policy Reinforcement Learning


This paper presents a model-free solution to the robust stabilization problem for discrete-time linear dynamical systems with bounded and mismatched uncertainty. An optimal controller design method is derived that recasts the robust control problem as the solution of an algebraic Riccati equation (ARE). It is shown that the optimal controller obtained by solving the ARE robustly stabilizes the uncertain system. To obtain a model-free solution to the derived ARE, off-policy reinforcement learning (RL) is employed to solve the problem at hand without requiring knowledge of the system dynamics. In addition, on- and off-policy RL methods are compared with respect to robustness to probing noise and dependence on system dynamics. Finally, a simulation example is carried out to validate the efficacy of the presented off-policy RL approach.
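The model-free route described in the abstract can be illustrated with a generic off-policy Q-learning sketch for the discrete-time LQR problem (this is an illustrative stand-in, not the paper's exact algorithm; the plant matrices `A`, `B` below are a hypothetical example used only to generate data, never by the learner):

```python
import numpy as np

# Hypothetical stable example plant; used ONLY to simulate transitions.
np.random.seed(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
n, m = 2, 1

def quad_features(x, u):
    """Upper-triangular quadratic monomials of z=[x;u], so phi(x,u)@h = z'Hz."""
    z = np.concatenate([x, u])
    return np.array([z[i] * z[j] * (1.0 if i == j else 2.0)
                     for i in range(n + m) for j in range(i, n + m)])

# Off-policy data: behavior input = current feedback + probing noise.
K = np.zeros((m, n))                       # initial admissible policy (A is stable)
X, U, Xn = [], [], []
x = np.array([1.0, -1.0])
for _ in range(200):
    u = K @ x + 0.5 * np.random.randn(m)   # exploratory behavior policy
    xn = A @ x + B @ u
    X.append(x); U.append(u); Xn.append(xn)
    x = xn if np.linalg.norm(xn) < 10 else np.random.randn(n)

# Policy iteration on the Q-function kernel H via least-squares Bellman fit:
#   phi(x,u)@h - phi(x', K x')@h = x'Qx + u'Ru   for every stored transition.
for _ in range(20):
    Phi = np.array([quad_features(X[k], U[k]) - quad_features(Xn[k], K @ Xn[k])
                    for k in range(len(X))])
    c = np.array([X[k] @ Q @ X[k] + U[k] @ R @ U[k] for k in range(len(X))])
    h, *_ = np.linalg.lstsq(Phi, c, rcond=None)
    H = np.zeros((n + m, n + m))           # rebuild symmetric H from its triangle
    idx = 0
    for i in range(n + m):
        for j in range(i, n + m):
            H[i, j] = H[j, i] = h[idx]; idx += 1
    K = -np.linalg.solve(H[n:, n:], H[n:, :n])   # greedy policy improvement

# Model-based check: iterate the discrete-time Riccati recursion directly.
P = np.eye(n)
for _ in range(500):
    P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B,
                                                        B.T @ P @ A)
K_star = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print(np.max(np.abs(K - K_star)))
```

Because the learner only sees `(x, u, x')` transitions and stage costs, the same data batch generated by the noisy behavior policy serves every policy-evaluation step, which is the essential off-policy advantage the abstract refers to.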


Electrical and Computer Engineering

Research Center/Lab(s)

Center for High Performance Computing Research

Keywords and Phrases

Model-Free; Off-Policy; On-Policy; Reinforcement Learning (RL); Robust Control; System Uncertainty

Document Type

Article - Journal

© 2019 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Dec 2019