Abstract

Discrete-time approximate dynamic programming (ADP) techniques have been widely used in the recent literature to determine optimal or near-optimal control policies for nonlinear systems. However, ADP inherently requires at least partial knowledge of the system dynamics as well as the value of the controlled plant one step ahead. In this work, a novel approach to ADP is presented that relaxes the need for partial knowledge of the nonlinear system. The proposed methodology entails a two-part process: online system identification and offline optimal control training. First, in the identification step, a neural network (NN) is tuned online to learn the complete plant dynamics, and local asymptotic stability is shown under the mild assumption that the NN functional reconstruction errors lie within a small-gain-type, norm-bounded conic sector. Then, using only the learned NN model, offline ADP is performed, resulting in a novel optimal control law. The proposed scheme does not require explicit knowledge of the system dynamics, since only the learned NN model is needed. Proof of convergence is provided, and simulation results verify the theoretical conjectures. ©2009 IEEE.
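The abstract's two-part process can be illustrated with a minimal sketch: an online gradient-tuned NN identifier learns the one-step map x_{k+1} = f(x_k, u_k), and heuristic dynamic programming (HDP) is then run offline using only that learned model. This is not the authors' code; the example plant, network sizes, learning gains, and the use of a grid search over controls in place of a separately trained actor network are all illustrative assumptions.

```python
# Hypothetical sketch of the two-stage idea (assumed plant, sizes, and gains).
import numpy as np

rng = np.random.default_rng(0)
n_x, n_u, n_h = 2, 1, 30

def features(z, W_in):
    # Fixed random hidden layer; only output weights are tuned (linear-in-parameters NN).
    return np.tanh(W_in @ z)

# --- Stage 1: online NN system identification -------------------------------
W_in = rng.normal(size=(n_h, n_x + n_u))   # fixed hidden-layer weights
W_out = np.zeros((n_x, n_h))               # output weights tuned online
eta = 0.02                                 # identification learning rate

def true_plant(x, u):
    # Unknown plant, used here only to generate data for the illustration.
    return np.array([x[1], -0.5 * np.sin(x[0]) - 0.1 * x[1] + u[0]])

x = np.array([0.5, -0.2])
for k in range(5000):                      # tune along a persistently exciting trajectory
    u = 0.5 * np.sin(0.05 * k) + 0.1 * rng.normal(size=1)
    phi = features(np.concatenate([x, u]), W_in)
    x_next = true_plant(x, u)
    e = x_next - W_out @ phi               # one-step prediction error
    W_out += eta * np.outer(e, phi)        # gradient-type weight update
    x = x_next

def nn_model(x, u):
    # Learned model: the only plant information used in Stage 2.
    return W_out @ features(np.concatenate([x, u]), W_in)

# --- Stage 2: offline HDP-style value iteration on the learned model --------
Q, R, gamma = np.eye(n_x), 0.1 * np.eye(n_u), 0.95
critic_Win = rng.normal(size=(n_h, n_x))   # fixed critic features
critic_w = np.zeros(n_h)                   # critic output weights
u_grid = np.linspace(-2.0, 2.0, 41)        # coarse action search (stand-in for an actor)

def value(xs):
    return critic_w @ np.tanh(critic_Win @ xs)

def stage_cost(xs, u):
    return xs @ Q @ xs + np.array([u]) @ R @ np.array([u])

X_train = rng.uniform(-1, 1, size=(200, n_x))   # offline training states
for it in range(50):
    # HDP target: V_{i+1}(x) = min_u [ U(x,u) + gamma * V_i(f_NN(x,u)) ]
    targets = [min(stage_cost(xs, u) + gamma * value(nn_model(xs, np.array([u])))
                   for u in u_grid) for xs in X_train]
    Phi = np.tanh(critic_Win @ X_train.T).T     # least-squares critic fit to the targets
    critic_w = np.linalg.lstsq(Phi, np.array(targets), rcond=None)[0]

def controller(xs):
    # Greedy (near-optimal) control from the converged critic and the NN model only.
    return min(u_grid, key=lambda u: stage_cost(xs, u)
               + gamma * value(nn_model(xs, np.array([u]))))

print(controller(np.array([0.3, -0.1])))
```

The sketch keeps the key property stated in the abstract: the offline optimization step never calls the true plant, only the NN model produced by the online identification stage.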

Department(s)

Electrical and Computer Engineering

Second Department

Computer Science

Keywords and Phrases

Heuristic dynamic programming; Neural network; Nonlinear optimal control; System identification

International Standard Book Number (ISBN)

978-142443553-1

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2024 Institute of Electrical and Electronics Engineers, All rights reserved.

Publication Date

18 Nov 2009
