Hamiltonian-Driven Adaptive Dynamic Programming based on Extreme Learning Machine

Abstract

In this paper, a novel reinforcement learning framework for continuous-time dynamical systems is presented based on the Hamiltonian functional and the extreme learning machine. The idea of solution search from optimization is introduced to find the optimal control policy for the optimal control problem. The search consists of three steps: evaluation, comparison, and improvement of an arbitrary admissible policy. The Hamiltonian functional plays a central role in this framework, under which only a single critic is required in the adaptive critic structure. The critic network is implemented by an extreme learning machine. Finally, a simulation study is conducted to verify the effectiveness of the presented algorithm.
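
The sketch below (not taken from the paper) illustrates the extreme-learning-machine critic idea mentioned in the abstract: input-to-hidden weights are drawn at random and kept fixed, and only the output weights are fit by least squares. The state dimension, tanh hidden units, regularization, and quadratic target value function are assumptions made for illustration only.

    # Minimal ELM-critic sketch (illustrative assumptions, not the authors' code).
    import numpy as np

    rng = np.random.default_rng(0)

    # ELM: random, fixed input-to-hidden weights W, b; only beta is learned.
    n_state, n_hidden = 2, 50
    W = rng.normal(size=(n_hidden, n_state))
    b = rng.normal(size=n_hidden)

    def hidden(X):
        """Random-feature hidden layer H = tanh(X W^T + b)."""
        return np.tanh(X @ W.T + b)

    def fit_critic(X, v_targets, reg=1e-6):
        """Closed-form regularized least squares for the output weights beta."""
        H = hidden(X)
        A = H.T @ H + reg * np.eye(n_hidden)
        return np.linalg.solve(A, H.T @ v_targets)

    def critic(X, beta):
        """Approximate value function V(x) ~= H(x) beta."""
        return hidden(X) @ beta

    # Illustration on synthetic data: fit V(x) = x^T P x for an assumed P.
    P = np.array([[2.0, 0.5], [0.5, 1.0]])
    X = rng.uniform(-1.0, 1.0, size=(500, n_state))
    v = np.einsum('ni,ij,nj->n', X, P, X)
    beta = fit_critic(X, v)
    print("critic RMSE:", np.sqrt(np.mean((critic(X, beta) - v) ** 2)))

In the paper's framework the single critic would instead be trained against a Hamiltonian-based target during policy evaluation; the closed-form least-squares fit shown here is the generic ELM training step.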

Meeting Name

14th International Symposium on Neural Networks, ISNN 2017 (2017: Jun. 21-26, Sapporo, Hakodate, and Muroran, Hokkaido, Japan)

Department(s)

Electrical and Computer Engineering

Second Department

Computer Science

Research Center/Lab(s)

Intelligent Systems Center

Second Research Center/Lab

Center for High Performance Computing Research

Comments

This work was supported in part by the Mary K. Finley Missouri Endowment, the Missouri S&T Intelligent Systems Center, the National Science Foundation, the National Natural Science Foundation of China (NSFC Grant No. 61333002) and the China Scholarship Council (CSC No. 201406460057).

Keywords and Phrases

Adaptive Dynamic Programming; Extreme Learning Machine; Hamiltonian Functional; Optimization; Reinforcement Learning

International Standard Serial Number (ISSN)

0302-9743

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2017 Springer, All rights reserved.

Publication Date

26 Jun 2017
