Reinforcement Learning Based Output-feedback Controller for Complex Nonlinear Discrete-Time Systems
A novel reinforcement-learning-based output-feedback adaptive neural network (NN) controller, also referred to as an adaptive-critic NN controller, is developed to track a desired trajectory for a class of complex feedback nonlinear discrete-time systems in the presence of bounded and unknown disturbances. The nonlinear discrete-time system consists of a second-order system in nonstrict-feedback form and an affine nonlinear discrete-time system tightly coupled together. Two adaptive-critic NN controllers are designed: a primary one for the nonstrict-feedback system and a secondary one for the affine system. A Lyapunov analysis shows the uniform ultimate boundedness (UUB) of the closed-loop tracking errors, weight estimates, and observer estimates. The separation and certainty-equivalence principles are relaxed, a persistency-of-excitation condition is not required, and a linear-in-the-unknown-parameters assumption is not needed. The performance of this controller is evaluated on a spark ignition (SI) engine operating with high exhaust gas recirculation (EGR) levels, where the objective is to reduce cyclic dispersion in heat release.
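For readers unfamiliar with the adaptive-critic structure the abstract describes, the following is a minimal, illustrative-only sketch of an actor-critic NN tracking loop for a scalar affine nonlinear discrete-time plant. The plant model, Gaussian basis functions, gains, and update laws here are assumptions chosen for exposition, not the tuning laws derived in the paper; in particular, the actor update below is a simplified tracking-error gradient step, whereas the paper tunes the action NN using the critic signal within a Lyapunov framework.

```python
import numpy as np

def rbf(x, centers, width=1.0):
    """One-hidden-layer NN features: Gaussian basis, output-layer weights linear."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

def run(steps=2000):
    centers = np.linspace(-2.0, 2.0, 9)
    Wa = np.zeros_like(centers)    # actor (action NN) output-layer weights
    Wc = np.zeros_like(centers)    # critic NN output-layer weights
    alpha_a, alpha_c, gamma = 0.05, 0.1, 0.9  # learning rates, discount (assumed values)
    ref = 1.0                      # constant desired trajectory
    x, e_next = 0.0, ref
    for _ in range(steps):
        phi = rbf(x, centers)
        u = Wa @ phi                          # actor NN control input
        x_next = 0.5 * np.sin(x) + u          # assumed affine plant: x_{k+1} = f(x_k) + u_k
        e_next = x_next - ref                 # tracking error
        # Critic: temporal-difference update of the estimated cost-to-go
        phi_next = rbf(x_next, centers)
        delta = e_next ** 2 + gamma * (Wc @ phi_next) - Wc @ phi
        Wc = Wc + alpha_c * delta * phi
        # Actor: simplified gradient step on the squared tracking error
        # (stand-in for the paper's critic-driven action-NN tuning law)
        Wa = Wa - alpha_a * e_next * phi
        x = x_next
    return x, e_next

final_x, final_e = run()
print(final_x, final_e)
```

With these assumed gains the actor weights behave like an integral action on the tracking error, so the state settles near the reference while the critic's TD update fits a cost-to-go estimate over the visited states.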
P. Shih and J. Sarangapani, "Reinforcement Learning Based Output-feedback Controller for Complex Nonlinear Discrete-Time Systems," Proceedings of the IEEE 22nd International Symposium on Intelligent Control (ISIC 2007), Singapore, Institute of Electrical and Electronics Engineers (IEEE), Oct. 2007.
The definitive version is available at https://doi.org/10.1109/ISIC.2007.4450920
IEEE 22nd International Symposium on Intelligent Control (2007: Oct. 1-3, Singapore)
Electrical and Computer Engineering
National Science Foundation (U.S.)
Keywords and Phrases
Control System Synthesis; Discrete Time Systems; Lyapunov Methods; Nonlinear Control Systems
Article - Conference proceedings
© 2007 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.