Safe Intermittent Reinforcement Learning for Nonlinear Systems
In this paper, an online intermittent actor-critic reinforcement learning method is used to stabilize nonlinear systems optimally while also guaranteeing safety. A barrier function-based transformation is introduced to ensure that the system does not violate user-defined safety constraints. It is shown that the safety constraints of the original system can be guaranteed by assuring the stability of the equilibrium point of an appropriately transformed system. An online intermittent actor-critic learning framework is then developed to learn the optimal safe intermittent controller, and Zeno behavior is shown to be excluded. Finally, numerical examples are provided to verify the efficacy of the learning algorithm.
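The core idea of the barrier function-based transformation can be illustrated with a minimal sketch. Assume a scalar state constrained to an interval (a, A); a logarithmic map sends the safe set onto the whole real line, so keeping the transformed state bounded (stable) keeps the original state strictly inside its constraints. The function names and the particular transform below are illustrative assumptions, not necessarily the exact transform used in the paper.

```python
import math

def barrier(x, a, A):
    """Map a constrained state x in (a, A) onto the real line.

    As x -> a+ the image tends to -inf; as x -> A- it tends to +inf.
    Hence any bounded trajectory of the transformed state corresponds
    to an original-state trajectory that never leaves (a, A).
    """
    assert a < x < A, "state must start inside the safe set"
    return math.log((x - a) / (A - x))

def barrier_inverse(s, a, A):
    """Recover the original state from the transformed one.

    For any finite s the result lies strictly inside (a, A),
    which is the safety guarantee the transformation provides.
    """
    e = math.exp(s)
    return (a + A * e) / (1.0 + e)
```

With such a map, stabilizing the transformed system's equilibrium implies the transformed state stays finite, and therefore the original state never violates the constraint interval, which is the sense in which stability of the transformed system certifies safety of the original one.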
Y. Yang et al., "Safe Intermittent Reinforcement Learning for Nonlinear Systems," Proceedings of the 58th IEEE Conference on Decision and Control (2019, Nice, France), pp. 690-697, Institute of Electrical and Electronics Engineers (IEEE), Dec 2019.
The definitive version is available at https://doi.org/10.1109/CDC40024.2019.9030210
58th IEEE Conference on Decision and Control (CDC 2019), Dec. 11-13, 2019, Nice, France
Electrical and Computer Engineering
Center for High Performance Computing Research
Keywords and Phrases
Intermittent Feedback; Reinforcement Learning; Safety Control
Article - Conference proceedings
© 2019 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.
01 Dec 2019