Online Model-Free N-Step HDP with Stability Analysis


Motivated by the power of temporal-difference learning with eligibility traces [TD(λ)], this paper presents a novel n-step adaptive dynamic programming (ADP) architecture that combines TD(λ) with regular TD learning to solve optimal control problems in fewer iterations. In contrast to the backward view of TD(λ), which requires an extra eligibility-trace parameter that is updated at the end of each episode (offline training), the new design uses forward-view learning, which updates at each time step (online training) without the eligibility-trace parameter and without requiring a mathematical model of the system. The new design is therefore called online model-free n-step action-dependent (AD) heuristic dynamic programming [NSHDP(λ)]. NSHDP(λ) consists of three neural networks: a critic network (CN) with regular one-step TD [TD(0)], a CN with n-step TD learning [or TD(λ)], and an actor network (AN). Because forward-view learning does not store an eligibility trace for each state, the NSHDP(λ) architecture has low computational cost and is memory efficient. Furthermore, the stability of NSHDP(λ) is proven under certain conditions using Lyapunov analysis, establishing the uniformly ultimately bounded (UUB) property. The results are compared with the performance of HDP and traditional action-dependent HDP(λ) [ADHDP(λ)] for different λ values. Two simulation benchmarks, a complex nonlinear system and a 2-D maze problem, are presented in this paper; a third benchmark, an inverted pendulum, is presented in the supplemental material. NSHDP(λ) performance is examined and compared with that of other ADP methods.
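To make the forward-view quantities in the abstract concrete, the following is a minimal sketch (not the paper's NSHDP(λ) implementation) of the standard n-step return and forward-view λ-return that TD(λ) targets. The function names, the trajectory data, and the convention that the value list carries one extra entry for the terminal state are all illustrative assumptions.

```python
def n_step_return(rewards, values, t, n, gamma):
    """G_t^(n): n discounted rewards plus the bootstrapped value n steps ahead.
    values[k] estimates V(s_k); values has len(rewards) + 1 entries, with the
    last entry equal to 0 for a terminal state."""
    T = len(rewards)
    n = min(n, T - t)  # truncate at the end of the episode
    g = sum(gamma**k * rewards[t + k] for k in range(n))
    return g + gamma**n * values[t + n]

def lambda_return(rewards, values, t, gamma, lam):
    """Forward-view λ-return: a (1 - λ)-weighted mix of all n-step returns,
    with the residual weight λ^(T-t-1) on the full-episode return."""
    horizon = len(rewards) - t
    g = 0.0
    for n in range(1, horizon):
        g += (1 - lam) * lam**(n - 1) * n_step_return(rewards, values, t, n, gamma)
    g += lam**(horizon - 1) * n_step_return(rewards, values, t, horizon, gamma)
    return g
```

With λ = 0 this target reduces to the one-step TD(0) return r_t + γV(s_{t+1}) used by the first critic network, and with λ = 1 it becomes the full Monte Carlo return, which shows how the λ parameter interpolates between the two critics described above.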


Electrical and Computer Engineering

Research Center/Lab(s)

Intelligent Systems Center

Second Research Center/Lab

Center for High Performance Computing Research


This work was supported in part by the Missouri University of Science and Technology Intelligent Systems Center, in part by the Mary K. Finley Missouri Endowment, in part by the National Science Foundation, in part by the Lifelong Learning Machines program from DARPA/Microsystems Technology Office, and in part by the Army Research Laboratory (ARL) under Contract W911NF-18-2-0260.

Keywords and Phrases

λ-Return; Action-Dependent (AD) Heuristic Dynamic Programming (ADHDP); Adaptive Dynamic Programming (ADP); Lyapunov Stability; Uniformly Ultimately Bounded (UUB)

Document Type

Article - Journal


© 2019 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Apr 2020