Abstract

Online optimal control of nonlinear discrete-time systems in a forward-in-time manner is a challenging problem because the Hamilton-Jacobi-Bellman (HJB) equation lacks a closed-form solution. Traditional value and policy iteration-based approximate optimal control schemes in the literature assume that a significant number of iterations can be performed within a single sampling interval, which is not practical. By contrast, this book chapter introduces a novel online, time-based optimal control framework for nonlinear discrete-time systems that combines (a) reinforcement learning and (b) online neural network approximation-based forward-in-time dynamic programming, without resorting to iterative methodology. An overall stability proof is provided, and the approximated control input is shown to converge to the optimal controller over time. Simulation results validate the proposed approach. © 2013 The Institute of Electrical and Electronics Engineers, Inc.
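
The central object here is the discrete-time HJB (Bellman) equation, V*(x_k) = min_u [ r(x_k, u) + V*(x_{k+1}) ], which generally admits no closed-form solution for nonlinear dynamics. As a minimal sketch of the forward-in-time idea described in the abstract, the Python example below tunes a neural-network critic with a single temporal-difference update per sampling instant instead of iterating value/policy updates within the interval. The plant, the quadratic basis functions, the learning rate, and the one-step actor rule are all illustrative assumptions, not the chapter's exact formulation.

```python
# Hedged sketch: online actor-critic ADP for a discrete-time nonlinear system.
# Everything below (plant, basis, gains) is an illustrative assumption.
import numpy as np

# Assumed affine discrete-time plant: x_{k+1} = f(x_k) + g(x_k) u_k.
def f(x): return np.array([0.9 * x[0] + 0.1 * x[1], -0.2 * np.sin(x[0]) + 0.8 * x[1]])
def g(x): return np.array([[0.0], [0.1 + 0.05 * np.cos(x[0])]])

Q, R = np.eye(2), np.eye(1)
def cost(x, u):  # one-step utility r(x, u) = x'Qx + u'Ru
    return x @ Q @ x + u @ R @ u

def phi(x):  # quadratic basis for the critic NN (assumed choice)
    return np.array([x[0] ** 2, x[0] * x[1], x[1] ** 2])

def dphi(x):  # gradient of the basis functions w.r.t. x
    return np.array([[2 * x[0], 0.0], [x[1], x[0]], [0.0, 2 * x[1]]])

w = np.zeros(3)           # critic weights, tuned online as time advances
alpha = 0.05              # critic learning rate (assumed)
x = np.array([1.0, -0.5])
for k in range(2000):
    # Actor: u = -0.5 R^{-1} g(x)' dV/dx using the current critic estimate.
    # (Assumption: the value gradient at the current state is used as a
    # practical simplification of the implicit discrete-time condition.)
    u = -0.5 * np.linalg.solve(R, g(x).T @ (dphi(x).T @ w))
    x_next = f(x) + g(x) @ u
    # Temporal-difference (Bellman) error for the current weights.
    delta = cost(x, u) + w @ phi(x_next) - w @ phi(x)
    # One gradient step per sampling instant: forward in time,
    # with no inner value/policy iteration loop.
    w -= alpha * delta * (phi(x_next) - phi(x))
    x = x_next
print("learned critic weights:", w)
```

Each pass through the loop performs exactly one weight update, so the approximated controller improves as time advances rather than through an offline iterative loop, which is the distinction the abstract draws.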

Department(s)

Electrical and Computer Engineering

Second Department

Computer Science

Publication Status

Full Access

Keywords and Phrases

OLA-based controller; offline methodology; online optimal forward-in-time control; optimal control of nonaffine systems; value/policy iteration-free design; RL/ADP and NNs; HCCI as a MIMO nonaffine system; TD learning without system dynamics

International Standard Book Number (ISBN)

978-1-118-10420-0

Document Type

Book Chapter

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2024 Wiley. All rights reserved.

Publication Date

07 Feb 2013
