Abstract
This paper presents a data-driven method based on off-policy integral reinforcement learning to solve the semi-global output regulation problem for continuous-time linear systems with input saturation. A family of state feedback laws for the input-constrained output regulation problem is designed by solving an algebraic Riccati equation. In contrast to existing methods, complete knowledge of the system dynamics is no longer required; instead, data collected from the online implementation are used to design the controller, so the design is data-driven. It is shown that the presented method yields feedback control inputs that respect the amplitude saturation constraint and stabilizes a given linear system whose poles all lie in the open left-half plane or on the imaginary axis. Finally, a simulation example demonstrates the validity of the presented approach to the semi-global output regulation of continuous-time linear systems with input saturation.
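For readers unfamiliar with the low-gain construction the abstract alludes to, the following is a minimal, model-based sketch in Python of a family of state feedback laws obtained from a parameterized algebraic Riccati equation. The system matrices A and B, the tuning parameter eps, and the specific ARE A'P + PA - PBB'P + eps*I = 0 are assumptions introduced here for illustration only; they are not taken from the paper, whose actual contribution is learning such a controller from online data without knowledge of the system dynamics.

```python
# Hedged sketch: low-gain state feedback via a parameterized ARE.
# This is NOT the paper's data-driven off-policy IRL algorithm; it is a
# model-based illustration of the kind of feedback family being learned.
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical example system: a double integrator, whose open-loop poles
# lie on the imaginary axis, matching the pole condition in the abstract.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

def low_gain_feedback(eps):
    """Solve A'P + PA - P B B' P + eps*I = 0 and return K(eps) = -B' P(eps)."""
    P = solve_continuous_are(A, B, eps * np.eye(A.shape[0]), np.eye(B.shape[1]))
    return -B.T @ P

# As eps decreases, the gain (and hence the control amplitude) shrinks,
# which is how stabilization under input saturation can be achieved
# semi-globally: eps is chosen small enough for the given set of
# initial conditions.
for eps in (1.0, 1e-2, 1e-4):
    K = low_gain_feedback(eps)
    poles = np.linalg.eigvals(A + B @ K)
    print(f"eps={eps:g}  K={np.round(K, 4)}  closed-loop poles={np.round(poles, 4)}")
```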
Recommended Citation
Y. Yang et al., "Off-Policy Integral Reinforcement Learning for Semi-Global Constrained Output Regulation of Continuous-Time Linear Systems," Proceedings of the International Joint Conference on Neural Networks, article no. 8489343, Institute of Electrical and Electronics Engineers, Oct 2018.
The definitive version is available at https://doi.org/10.1109/IJCNN.2018.8489343
Department(s)
Electrical and Computer Engineering
Second Department
Computer Science
Keywords and Phrases
Algebraic Riccati equation; input saturation; model-free; output regulation; reinforcement learning
International Standard Book Number (ISBN)
978-150906014-6
Document Type
Article - Conference proceedings
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2018 Institute of Electrical and Electronics Engineers, All rights reserved.
Publication Date
10 Oct 2018
Comments
National Science Foundation, Grant 61333002