Safe Intermittent Reinforcement Learning for Nonlinear Systems
Abstract
In this paper, an online intermittent actor-critic reinforcement learning method is proposed to stabilize nonlinear systems optimally while also guaranteeing safety. A barrier-function-based transformation is introduced to ensure that the system does not violate the user-defined safety constraints. It is shown that the safety constraints of the original system are satisfied whenever the equilibrium point of an appropriately transformed system is stable. An online intermittent actor-critic learning framework is then developed to learn the optimal safe intermittent controller, and Zeno behavior is shown to be excluded. Finally, numerical examples verify the efficacy of the learning algorithm.
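As a reading aid, the display below sketches the kind of barrier-function transformation typically used in this line of work; the specific logarithmic form and the constraint bounds a_i < 0 < A_i are illustrative assumptions, not taken verbatim from the paper.

% Illustrative barrier transformation for a state x_i constrained to (a_i, A_i),
% with assumed bounds a_i < 0 < A_i (a sketch, not the paper's exact construction):
\[
  s_i \;=\; b_i(x_i) \;=\; \log\!\left( \frac{A_i}{a_i}\,\frac{a_i - x_i}{A_i - x_i} \right),
  \qquad a_i < x_i < A_i.
\]
% b_i maps the constrained interval (a_i, A_i) bijectively onto the real line
% and satisfies b_i(0) = 0, so stabilizing the transformed state s_i at the
% origin keeps the original state x_i strictly inside its safety bounds --
% the stability-implies-safety argument summarized in the abstract.

Under such a construction, the constrained control problem in x becomes an unconstrained stabilization problem in s, which is what makes standard actor-critic machinery applicable to the safe-control setting.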
Recommended Citation
Y. Yang et al., "Safe Intermittent Reinforcement Learning for Nonlinear Systems," Proceedings of the 58th IEEE Conference on Decision and Control (2019, Nice, France), pp. 690-697, Institute of Electrical and Electronics Engineers (IEEE), Dec 2019.
The definitive version is available at https://doi.org/10.1109/CDC40024.2019.9030210
Meeting Name
58th IEEE Conference on Decision and Control, CDC 2019 (2019: Dec. 11-13, Nice, France)
Department(s)
Electrical and Computer Engineering
Research Center/Lab(s)
Center for High Performance Computing Research
Keywords and Phrases
Intermittent Feedback; Reinforcement Learning; Safety Control
International Standard Book Number (ISBN)
978-1-7281-1398-2
International Standard Serial Number (ISSN)
0743-1546
Document Type
Article - Conference proceedings
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2019 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.
Publication Date
01 Dec 2019
Comments
This work was supported in part by the National Natural Science Foundation of China under Grants 61903028 and 61333002, in part by the Fundamental Research Funds for the Central Universities of USTB under Grants FRF-TP-18-031A1 and FRF-BD-17-002A, in part by the China Postdoctoral Science Foundation under Grant 2018M641197, in part by the National Science Foundation under Grant NSF CAREER CPS-1851588, in part by ONR Minerva under Grant N00014-18-1-2160, in part by NATO under Grant SPS G5176, in part by the Mary K. Finley Endowment, in part by the Missouri S&T Intelligent Systems Center, and in part by the Army Research Laboratory under Cooperative Agreement Number W911NF-18-2-0260.