Hamiltonian-Driven Adaptive Dynamic Programming with Approximation Errors
In this article, we consider an iterative adaptive dynamic programming (ADP) algorithm within the Hamiltonian-driven framework to solve the Hamilton-Jacobi-Bellman (HJB) equation for the infinite-horizon optimal control problem for continuous-time nonlinear systems. First, a novel function, the "min-Hamiltonian," is defined to capture the fundamental properties of the classical Hamiltonian. It is shown that both the HJB equation and the policy iteration (PI) algorithm can be formulated in terms of the min-Hamiltonian within the Hamiltonian-driven framework. Moreover, we develop an iterative ADP algorithm that accounts for approximation errors arising during the policy evaluation step. We then derive a sufficient condition on the iterative value gradient that guarantees both closed-loop stability of the equilibrium point and convergence to the optimal value. A model-free extension based on an off-policy reinforcement learning (RL) technique is also provided. Finally, numerical results illustrate the efficacy of the proposed framework.
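As a brief sketch of the objects the abstract refers to (using a standard formulation; the specific dynamics and cost are not given in this record, so affine dynamics dx/dt = f(x) + g(x)u and a running cost r(x, u) are assumed here for illustration), the Hamiltonian, the min-Hamiltonian, and the HJB equation take the form:

```latex
% Hamiltonian associated with a value function V and control u
H(x, u, \nabla V) = \nabla V(x)^{\top}\bigl(f(x) + g(x)\,u\bigr) + r(x, u)

% min-Hamiltonian: pointwise minimization over admissible controls
\mathcal{H}(x, \nabla V) = \min_{u} H(x, u, \nabla V)

% HJB equation: the optimal value function V^* satisfies
\mathcal{H}\bigl(x, \nabla V^{*}\bigr) = 0
```

In this form, policy iteration alternates between evaluating V for a fixed policy and improving the policy by minimizing H; the paper's stability and convergence analysis concerns the case where the evaluation step is performed only approximately.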
Y. Yang et al., "Hamiltonian-Driven Adaptive Dynamic Programming with Approximation Errors," IEEE Transactions on Cybernetics, Institute of Electrical and Electronics Engineers (IEEE), Sep 2021.
The definitive version is available at https://doi.org/10.1109/TCYB.2021.3108034
Electrical and Computer Engineering
Intelligent Systems Center
Keywords and Phrases
Approximation Algorithms; Approximation Error; Costs; Dynamic Programming; Hamilton-Jacobi-Bellman (HJB) Equation; Hamiltonian-Driven Framework; Inexact Adaptive Dynamic Programming (ADP); Iterative Algorithms; Mathematical Model; Optimal Control; Stability Analysis
Article - Journal
© 2021 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.
08 Sep 2021