Abstract

The time scales calculus, which includes the study of the nabla derivative, is an emerging area of research with many multidisciplinary applications. We extend this calculus to Approximate Dynamic Programming. In particular, we investigate the application of the nabla derivative, one of the fundamental dynamic derivatives of time scales. We present a nabla-derivative-based derivation and proof of the Hamilton-Jacobi-Bellman equation, whose solution is the fundamental problem in the field of dynamic programming. By drawing together the calculus of time scales and the applied area of stochastic control via Approximate Dynamic Programming, we connect two major fields of research.
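For orientation, the sketch below records the standard textbook definition of the nabla derivative on a time scale and the classical continuous-time Hamilton-Jacobi-Bellman equation that the paper generalizes. These are common background statements only, not the paper's own derivation; the notation (the time scale \mathbb{T}, backward jump \rho, value function V, dynamics f, cost rate r) follows usual convention and is assumed here rather than quoted from the paper.

% Backward jump operator and backward graininess on a time scale \mathbb{T}:
\rho(t) = \sup\{\, s \in \mathbb{T} : s < t \,\}, \qquad \nu(t) = t - \rho(t)

% Nabla derivative of f at t (informal limit form of the standard definition):
f^{\nabla}(t) = \lim_{s \to t,\ s \in \mathbb{T}} \frac{f(\rho(t)) - f(s)}{\rho(t) - s}

% Special cases: \mathbb{T} = \mathbb{R} gives f^{\nabla} = f' (the ordinary derivative);
% \mathbb{T} = \mathbb{Z} gives f^{\nabla}(t) = f(t) - f(t-1) (the backward difference).

% Classical HJB equation with value function V, dynamics \dot{x} = f(x,u), cost rate r:
-\frac{\partial V}{\partial t}(x,t) = \min_{u} \Big\{ r(x,u) + \frac{\partial V}{\partial x}(x,t)\, f(x,u) \Big\}

Per the abstract, replacing the time derivative with the nabla derivative lets a single HJB equation cover the continuous-time case and its backward-difference analogue in discrete time within one framework.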

Meeting Name

IEEE International Conference on Systems, Man and Cybernetics (SMC 2008)

Department(s)

Electrical and Computer Engineering

Second Department

Computer Science

Sponsor(s)

National Science Foundation (U.S.)

Keywords and Phrases

Hamilton-Jacobi-Bellman Equation; Approximate Dynamic Programming; Reinforcement Learning; Time Scales

Document Type

Article - Conference proceedings

Document Version

Final Version

File Type

text

Language(s)

English

Rights

© 2008 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Oct 2008
