Hamiltonian-Driven Adaptive Dynamic Programming Based on Extreme Learning Machine
In this paper, a novel reinforcement learning framework for continuous-time dynamical systems is presented, based on the Hamiltonian functional and the extreme learning machine. The idea of solution search in optimization is introduced to find the optimal control policy for the optimal control problem. The optimal control search consists of three steps: evaluation, comparison, and improvement of an arbitrary admissible policy. The Hamiltonian functional plays a central role in this framework, under which only one critic is required in the adaptive critic structure. The critic network is implemented by an extreme learning machine. Finally, a simulation study is conducted to verify the effectiveness of the presented algorithm.
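The extreme learning machine mentioned in the abstract trains a single-hidden-layer network by drawing the input weights and biases at random, fixing them, and solving for the output weights in closed form by least squares. A minimal sketch of that training scheme is given below; the toy target function, the number of hidden units, and all names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Extreme learning machine: random fixed hidden layer,
    output weights solved by least squares (pseudoinverse)."""
    d = X.shape[1]
    W = rng.normal(size=(d, n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)        # random hidden biases (never trained)
    H = np.tanh(X @ W + b)               # hidden-layer activation matrix
    beta = np.linalg.pinv(H) @ y         # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: approximate a smooth value-function-like target V(x) = x1^2 + x2^2
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = (X**2).sum(axis=1)
W, b, beta = elm_fit(X, y)
err = np.abs(elm_predict(X, W, b, beta) - y).max()
```

Because only the linear output layer is fit, training reduces to one pseudoinverse computation, which is what makes the ELM attractive as a critic approximator.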
Y. Yang et al., "Hamiltonian-Driven Adaptive Dynamic Programming Based on Extreme Learning Machine," Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 10261 LNCS, pp. 197-205, Springer, Jun 2017.
The definitive version is available at https://doi.org/10.1007/978-3-319-59072-1_24
14th International Symposium on Neural Networks, ISNN 2017 (2017: Jun. 21-26, Sapporo, Hakodate, and Muroran, Hokkaido, Japan)
Electrical and Computer Engineering
Intelligent Systems Center
China Scholarship Council
National Science Foundation (U.S.)
National Natural Science Foundation of China
Keywords and Phrases
Adaptive Dynamic Programming; Extreme Learning Machine; Hamiltonian Functional; Optimization; Reinforcement Learning
Article - Conference proceedings
© 2017 Springer, All rights reserved.