Safe Intermittent Reinforcement Learning with Static and Dynamic Event Generators
In this article, we present an intermittent framework for safe reinforcement learning (RL) algorithms. First, we develop a barrier function-based system transformation that imposes state constraints while converting the original problem into an unconstrained optimization problem. Second, based on the derived optimal policies, two types of intermittent feedback RL algorithms are presented, namely, a static and a dynamic one. Finally, we leverage an actor/critic structure to solve the problem online while guaranteeing optimality, stability, and safety. Simulation results show the efficacy of the proposed approach.
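The abstract's barrier function-based system transformation can be illustrated with a minimal sketch. The specific logarithmic barrier below is an assumption for illustration (a common choice in the safe-RL literature, not necessarily the exact form used in the article): it maps a constrained state interval (-a, A) bijectively onto the real line, so that an unconstrained problem can be solved in the transformed coordinate while the original state remains inside its bounds.

```python
import math

def barrier(z, a, A):
    """Assumed logarithmic barrier b(z; a, A), illustrative only.

    Maps the constrained interval (-a, A) onto the whole real line,
    with b(0) = 0, so optimizing over the transformed coordinate is
    unconstrained while the original state stays within its bounds.
    """
    assert -a < z < A, "state must satisfy the constraint -a < z < A"
    return math.log((A * (a + z)) / (a * (A - z)))

def barrier_inv(s, a, A):
    """Inverse of the assumed barrier: recovers the constrained state
    from the unconstrained coordinate s; its image lies in (-a, A)."""
    e = math.exp(s)
    return a * A * (e - 1.0) / (A + a * e)
```

Composing the two functions returns the original state, and any real-valued trajectory in the transformed coordinate corresponds to a constraint-satisfying trajectory in the original one.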
Y. Yang et al., "Safe Intermittent Reinforcement Learning with Static and Dynamic Event Generators," IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 12, pp. 5441-5455, IEEE, Feb. 2020.
The definitive version is available at https://doi.org/10.1109/TNNLS.2020.2967871
Electrical and Computer Engineering
Center for High Performance Computing Research
Keywords and Phrases
Actor/Critic Structures; Asymptotic Stability; Barrier Functions; Reinforcement Learning (RL); Safety-Critical Systems
Article - Journal
© 2020 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.
10 Feb 2020