Integral Reinforcement Learning and Experience Replay for Adaptive Optimal Control of Partially-Unknown Constrained-Input Continuous-Time Systems
Abstract
In this paper, an integral reinforcement learning (IRL) algorithm on an actor-critic structure is developed to learn online the solution to the Hamilton-Jacobi-Bellman equation for partially-unknown constrained-input systems. The technique of experience replay is used to update the critic weights to solve an IRL Bellman equation. This means that, unlike in existing reinforcement learning algorithms, recorded past experiences are used concurrently with current data to adapt the critic weights. It is shown that, using this technique, an easy-to-check condition on the richness of the recorded data is sufficient to guarantee convergence to a near-optimal control law, in place of the traditional persistence of excitation condition, which is often difficult or impossible to verify online. Stability of the proposed feedback control law is shown, and the effectiveness of the proposed method is illustrated with simulation examples.
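To make the critic update described above concrete, the following is a minimal Python sketch, not the authors' code, of an experience-replay update on an IRL Bellman residual. It assumes a linear-in-parameters critic V(x) ≈ W^T phi(x), a recorded buffer of past samples (delta_phi_k, rho_k), where delta_phi_k = phi(x(t_k)) - phi(x(t_k + T)) along the trajectory and rho_k is the integral reward over that interval; all function and variable names are illustrative assumptions, not taken from the paper.

import numpy as np

def critic_replay_update(W, delta_phi_now, rho_now, buffer, lr=0.1):
    # One normalized-gradient step on the squared IRL Bellman residual,
    # using the current sample together with all recorded past samples
    # (the experience-replay idea: old and new data are used concurrently).
    samples = [(delta_phi_now, rho_now)] + list(buffer)
    grad = np.zeros_like(W)
    for delta_phi, rho in samples:
        e = W @ delta_phi - rho  # IRL Bellman residual for this sample
        grad += e * delta_phi / (1.0 + delta_phi @ delta_phi) ** 2
    return W - lr * grad / len(samples)

def recorded_data_is_rich(buffer, n_features):
    # Easy-to-check richness condition on the recorded data (used in place
    # of persistence of excitation): the stored delta_phi vectors should
    # span the critic feature space.
    if not buffer:
        return False
    D = np.vstack([delta_phi for delta_phi, _ in buffer])
    return np.linalg.matrix_rank(D) >= n_features

A rank test of this kind is the natural numerical counterpart of requiring that the recorded regressor vectors contain at least as many linearly independent samples as there are unknown critic parameters; the exact condition and update law used in the paper should be taken from the article itself.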
Recommended Citation
H. Modares et al., "Integral Reinforcement Learning and Experience Replay for Adaptive Optimal Control of Partially-Unknown Constrained-Input Continuous-Time Systems," Automatica, vol. 50, no. 1, pp. 193-202, Elsevier, Jan. 2014.
The definitive version is available at https://doi.org/10.1016/j.automatica.2013.09.043
Department(s)
Electrical and Computer Engineering
Keywords and Phrases
Adaptive Optimal Control; Experience Replay; Feedback Control Law; Hamilton-Jacobi-Bellman Equation; Input Constraints; Near-Optimal Control; Persistence of Excitation; Control; Control Theory; Dynamic Programming; Neural Networks; Online Systems; Reinforcement Learning; Optimal Control Systems; Integral Reinforcement Learning; Optimal Control
International Standard Serial Number (ISSN)
0005-1098
Document Type
Article - Journal
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2014 Elsevier. All rights reserved.
Publication Date
01 Jan 2014