Decision Theory on Dynamic Domains: Nabla Derivatives and the Hamilton-Jacobi-Bellman Equation
This document has been relocated to http://scholarsmine.mst.edu/ele_comeng_facwork/1527
The time scales calculus, which includes the study of the nabla derivative, is an emerging topic with many multidisciplinary applications. We extend this calculus to Approximate Dynamic Programming. In particular, we investigate the application of the nabla derivative, one of the fundamental dynamic derivatives of time scales. We present a nabla-derivative based derivation and proof of the Hamilton-Jacobi-Bellman equation, whose solution is the central problem of dynamic programming. By drawing together the calculus of time scales and the applied area of stochastic control via Approximate Dynamic Programming, we connect two major fields of research.
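For context, the standard definitions underlying the abstract can be sketched as follows. This is textbook background on the nabla derivative and the classical continuous-time Hamilton-Jacobi-Bellman equation, not a reproduction of the paper's own time-scales derivation; the symbols (value function V, running cost L, dynamics f) are illustrative choices.

```latex
% Backward jump operator and backward graininess on a time scale T:
%   \rho(t) = \sup\{\, s \in T : s < t \,\}, \qquad \nu(t) = t - \rho(t).
%
% When f is continuous at t and \rho(t) < t, the nabla derivative is
%   f^{\nabla}(t) = \frac{f(t) - f(\rho(t))}{\nu(t)}.
%
% Special cases:
%   T = \mathbb{R}:  f^{\nabla}(t) = f'(t)            (ordinary derivative)
%   T = \mathbb{Z}:  f^{\nabla}(t) = f(t) - f(t-1)    (backward difference)
%
% Classical continuous-time HJB equation for a value function V(t,x),
% running cost L, and dynamics \dot{x} = f(x,u):
%   -\frac{\partial V}{\partial t}
%     = \min_{u}\Bigl[\, L(x,u)
%       + \frac{\partial V}{\partial x}\, f(x,u) \Bigr].
```

The paper's contribution is a nabla-derivative analogue of this equation valid on an arbitrary time scale, which recovers the differential and backward-difference forms above as the two extreme cases.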