Adaptive Dynamic Programming in the Hamiltonian-Driven Framework


This chapter presents a Hamiltonian-driven framework of adaptive dynamic programming (ADP) for continuous-time nonlinear systems. Three fundamental problems in solving the optimal control problem are presented, i.e., the evaluation of a given admissible policy, the comparison of two different admissible policies with respect to performance, and the performance improvement of a given admissible control. It is shown that the Hamiltonian functional can be viewed as the temporal difference for dynamical systems in continuous time. Therefore, minimizing the Hamiltonian functional is equivalent to value function approximation. An iterative algorithm starting from an arbitrary admissible control is presented for approximating the optimal control, together with its convergence proof. The Hamiltonian-driven ADP algorithm can be implemented using a critic-only structure, which is trained to approximate the optimal value gradient. A simulation example is conducted to verify the effectiveness of Hamiltonian-driven ADP.
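The iteration described in the abstract (policy evaluation by driving the Hamiltonian to zero, followed by policy improvement via the value gradient) can be sketched on a toy problem. The following is an illustrative sketch only, not the chapter's exact algorithm: a scalar linear system x' = a*x + b*u with quadratic cost, where all the numbers (a, b, q, r, and the initial gain k0) are hypothetical choices for demonstration. For a linear policy u = -k*x and quadratic critic V(x) = p*x^2, the Hamiltonian H(x) = (dV/dx)(a*x + b*u) + q*x^2 + r*u^2 factors as x^2 * (2p(a - bk) + q + rk^2), so setting it to zero evaluates the policy exactly, and minimizing H over u gives the improved policy u = -(b/(2r)) dV/dx.

```python
# Illustrative sketch of Hamiltonian-driven policy iteration for a scalar
# linear system x' = a*x + b*u with cost integral of (q*x^2 + r*u^2) dt.
# Hypothetical numbers throughout; not the chapter's exact algorithm.

def evaluate_policy(k, a, b, q, r):
    """Policy evaluation: solve H(x) = 0 for the critic parameter p.

    With u = -k*x and V(x) = p*x^2, H(x) = x^2 * (2p(a - bk) + q + rk^2),
    so H vanishes for all x when p = (q + r*k^2) / (2*(b*k - a)).
    """
    return (q + r * k * k) / (2.0 * (b * k - a))

def improve_policy(p, b, r):
    """Policy improvement: u = -(b/(2r)) dV/dx = -(b*p/r)*x, return new gain k."""
    return b * p / r

a, b, q, r = -1.0, 1.0, 1.0, 1.0   # hypothetical system and cost data
k = 1.0                            # an admissible (stabilizing) initial gain
for _ in range(30):
    p = evaluate_policy(k, a, b, q, r)  # minimize Hamiltonian: critic update
    k = improve_policy(p, b, r)         # value-gradient-based policy update

# p converges to the Riccati solution p* = sqrt(2) - 1 for these numbers.
```

In this scalar case each evaluation step is exact, so the loop reproduces the familiar quadratic convergence of Newton-type iterations on the Riccati equation; the chapter's critic-only implementation instead approximates the optimal value gradient from data by minimizing the Hamiltonian functional.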


Electrical and Computer Engineering

Research Center/Lab(s)

Intelligent Systems Center


Part of the Studies in Systems, Decision and Control book series

This work was supported in part by the National Natural Science Foundation of China under Grant No. 61903028, in part by the China Post-Doctoral Science Foundation under Grant No. 2018M641197, in part by the Fundamental Research Funds for the Central Universities under Grant Nos. FRF-TP-18-031A1 and FRF-BD-17-002A, in part by the DARPA/Microsystems Technology Office, and in part by the Army Research Laboratory under Grant No. W911NF-18-2-0260.

Keywords and Phrases

Adaptive Dynamic Programming; Hamiltonian-Driven Framework; Temporal Difference; Value Gradient Learning

International Standard Serial Number (ISSN)

2198-4182; 2198-4190

Document Type

Book - Chapter

© 2021 Springer. All rights reserved.

Publication Date

01 Jan 2021