Adaptive Suboptimal Output-Feedback Control for Linear Systems using Integral Reinforcement Learning


Reinforcement learning (RL) techniques have been successfully used to find optimal state-feedback controllers for continuous-time (CT) systems. In most real-world control applications, however, measuring the full system state is impractical, and output-feedback designs are preferred. This paper develops an online learning algorithm based on the integral RL (IRL) technique that finds a suboptimal output-feedback controller for partially unknown CT linear systems. At each iteration, the proposed algorithm solves an IRL Bellman equation online, in real time, to evaluate an output-feedback policy, and then updates the output-feedback gain using the information provided by the evaluated policy. Knowledge of the system drift dynamics is not required. An adaptive observer supplies full-state estimates for the IRL Bellman equation during learning; once learning is complete, the observer is no longer needed. The convergence of the proposed algorithm to a suboptimal output-feedback solution and the performance of the proposed method are verified through simulation on two real-world applications, namely an X-Y table and the F-16 aircraft.
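The policy-evaluation/policy-improvement loop described in the abstract follows the classical policy-iteration principle for linear systems. The paper's IRL method performs the evaluation step model-free, from measured data, via an IRL Bellman equation; the sketch below instead illustrates the principle with a model-based Lyapunov solve (Kleinman's algorithm), which is the equation the IRL step solves implicitly. The double-integrator plant, cost weights, and initial gain are illustrative assumptions, not taken from the paper.

```python
# Illustrative-only sketch: Kleinman-style policy iteration for LQR.
# The paper's IRL algorithm replaces the model-based policy-evaluation
# step below with a data-driven IRL Bellman equation, so the drift
# matrix A need not be known. All numbers here are assumed examples.
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # assumed drift dynamics (unknown to IRL)
B = np.array([[0.0],
              [1.0]])        # input matrix
Q = np.eye(2)                # state cost weight
R = np.array([[1.0]])        # control cost weight

def lyap(Ac, M):
    """Solve Ac.T @ P + P @ Ac + M = 0 via Kronecker vectorization."""
    n = Ac.shape[0]
    I = np.eye(n)
    L = np.kron(I, Ac.T) + np.kron(Ac.T, I)
    return np.linalg.solve(L, -M.reshape(-1)).reshape(n, n)

K = np.array([[1.0, 1.0]])   # assumed initial stabilizing gain
for _ in range(10):
    Ac = A - B @ K
    # Policy evaluation: cost matrix P of the current policy u = -K x.
    P = lyap(Ac, Q + K.T @ R @ K)
    # Policy improvement: greedy gain update from the evaluated policy.
    K = np.linalg.solve(R, B.T @ P)

print(K)  # converges to the optimal LQR gain [1, sqrt(3)]
```

Each pass evaluates the current gain and then improves it, mirroring the iterate-evaluate-update structure the abstract describes; the IRL version additionally estimates the state with an adaptive observer while this loop runs.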


Electrical and Computer Engineering

Keywords and Phrases

Aircraft Control; Continuous-Time Systems; Controllers; Dynamic Programming; Feedback Control; Iterative Methods; Linear Systems; Nonlinear Control Systems; Online Systems; Reinforcement Learning; State Feedback; Integral Reinforcement Learning (IRL); Online Learning Algorithms; Optimal Control; Output Feedback; Output-Feedback Controllers; Adaptive Control Systems; Linear Continuous-Time (CT) Systems

Document Type

Article - Journal




© 2015 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Jan 2015