Safe Intermittent Reinforcement Learning for Nonlinear Systems


Abstract

In this paper, an online intermittent actor-critic reinforcement learning method is developed to stabilize nonlinear systems optimally while also guaranteeing safety. A barrier-function-based transformation is introduced to ensure that the system does not violate user-defined safety constraints. It is shown that the safety constraints of the original system are guaranteed by assuring the stability of the equilibrium point of an appropriately transformed system. An online intermittent actor-critic learning framework is then developed to learn the optimal safe intermittent controller, and Zeno behavior is shown to be excluded. Finally, numerical examples verify the efficacy of the learning algorithm.
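As an illustrative sketch only (not the paper's implementation), a barrier-function transformation of the kind the abstract describes is commonly built from a logarithmic barrier that maps a constrained interval (a, A), with a < 0 < A, onto the whole real line while fixing the origin. Keeping the transformed state bounded then keeps the original state inside the safe set. The bounds `a` and `A` below are assumed, hypothetical constraint limits:

```python
import math


def barrier(z: float, a: float, A: float) -> float:
    """Logarithmic barrier transformation for a state constrained to (a, A), a < 0 < A.

    Maps (a, A) onto the real line with barrier(0) = 0, and diverges as
    z approaches either constraint boundary, so a bounded (stable)
    transformed state implies the original state stays in the safe set.
    """
    assert a < z < A, "state must lie inside the safe set (a, A)"
    return math.log((A * (a - z)) / (a * (A - z)))


def barrier_inv(s: float, a: float, A: float) -> float:
    """Inverse transformation: recovers the original state from the transformed one."""
    e = math.exp(s)
    return a * A * (e - 1.0) / (a * e - A)
```

For example, a state constrained to (-1, 2) is mapped so that `barrier(0.0, -1.0, 2.0)` is exactly 0, and `barrier_inv` undoes the mapping for any value inside the interval.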

Meeting Name

58th IEEE Conference on Decision and Control, CDC 2019 (2019: Dec. 11-13, Nice, France)


Electrical and Computer Engineering


This work was supported in part by the National Natural Science Foundation of China under Grants 61903028 and 61333002, in part by the Fundamental Research Funds for the Central Universities of USTB under Grants FRF-TP-18-031A1 and FRF-BD-17-002A, in part by the China Post-Doctoral Science Foundation under Grant 2018M641197, in part by the National Science Foundation under Grant NSF CAREER CPS-1851588, in part by ONR Minerva under Grant N00014-18-1-2160, in part by NATO under Grant SPS G5176, in part by the Mary K. Finley Endowment, in part by the Missouri S&T Intelligent Systems Center, and in part by the Army Research Laboratory under Cooperative Agreement Number W911NF-18-2-0260.

Keywords and Phrases

Intermittent Feedback; Reinforcement Learning; Safety Control

Document Type

Article - Conference proceedings

© 2019 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Dec 2019