Reinforcement Learning Based Output-Feedback Control of Nonlinear Nonstrict Feedback Discrete-Time Systems with Application to Engines

Peter Shih
Jonathan B. Vance
Brian C. Kaul
Jagannathan Sarangapani, Missouri University of Science and Technology
J. A. Drallmeier, Missouri University of Science and Technology

This document has been relocated to http://scholarsmine.mst.edu/ele_comeng_facwork/1244


Abstract

A novel reinforcement-learning-based, output-feedback adaptive neural network (NN) controller, also referred to as an adaptive-critic NN controller, is developed to track a desired trajectory for a class of complex nonlinear discrete-time systems in the presence of bounded and unknown disturbances. The controller includes an observer for estimating the states and outputs, a critic NN, and two action NNs for generating the virtual and actual control inputs. The critic approximates a certain strategic utility function, and the action NNs are used to minimize both the strategic utility function and their own outputs. All NN weights adapt online toward minimization of a performance index using a gradient-descent-based rule. A Lyapunov analysis proves the uniform ultimate boundedness (UUB) of the closed-loop tracking errors, weight estimation errors, and observer estimation errors. The separation and certainty-equivalence principles are relaxed; neither a persistency-of-excitation condition nor a linear-in-the-unknown-parameters assumption is needed. The performance of this adaptive-critic NN controller is evaluated through simulation with the Daw engine model in lean mode. The objective is to reduce the cyclic dispersion in heat release by using the controller.
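
To make the abstract's description of gradient-descent-based critic and action updates concrete, the following is a minimal Python sketch of one adaptive-critic learning step. It is illustrative only: the network sizes, learning rates, basis functions, and the binary instantaneous utility are assumptions chosen for demonstration, not the paper's equations, and the observer as well as the split into virtual and actual control inputs are omitted.

    import numpy as np

    # Minimal sketch of an adaptive-critic structure: a critic NN approximating
    # a strategic utility function and an action NN generating the control input,
    # both updated online by gradient descent on quadratic error signals.
    # All dimensions, gains, and basis functions below are illustrative assumptions.

    np.random.seed(0)

    n_state, n_hidden = 2, 10
    alpha_c, alpha_a = 0.05, 0.05    # critic / action learning rates (assumed)
    gamma = 0.9                      # discount-like factor in the utility (assumed)

    V_c = 0.1 * np.random.randn(n_hidden, n_state)   # critic input weights (held fixed here)
    W_c = np.zeros((n_hidden, 1))                    # critic output weights (tuned online)
    V_a = 0.1 * np.random.randn(n_hidden, n_state)   # action input weights (held fixed here)
    W_a = np.zeros((n_hidden, 1))                    # action output weights (tuned online)

    phi = lambda V, x: np.tanh(V @ x)                # hidden-layer activation

    def binary_utility(e, tol=0.05):
        """Instantaneous utility: 0 if the tracking error is small, 1 otherwise."""
        return 0.0 if np.linalg.norm(e) < tol else 1.0

    def adaptive_critic_step(x, e, Q_prev):
        """One online gradient-descent update of the critic and action NN weights."""
        global W_c, W_a
        Q = float(W_c.T @ phi(V_c, x))               # critic estimate of the strategic utility
        u = float(W_a.T @ phi(V_a, x))               # action NN control input

        # Critic error: temporal-difference-like residual of the strategic utility.
        e_c = Q - gamma * Q_prev + binary_utility(e)
        W_c -= alpha_c * phi(V_c, x)[:, None] * e_c  # gradient-descent critic update

        # Action error: drive the critic signal and the control effort toward zero.
        e_a = Q + u
        W_a -= alpha_a * phi(V_a, x)[:, None] * e_a  # gradient-descent action update
        return Q, u

    # Example usage on dummy state and tracking-error data.
    x = np.array([0.2, -0.1]); e = np.array([0.2, -0.1]); Q_prev = 0.0
    for _ in range(5):
        Q_prev, u = adaptive_critic_step(x, e, Q_prev)

In the paper itself, the analogous updates are derived so that the closed-loop signals remain UUB under the stated assumptions; the sketch above only conveys the general structure of critic and action weight tuning by gradient descent.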