Data-Driven Optimal Control with Reduced Output Measurements

Abstract

This paper uses the integral reinforcement learning (IRL) technique to develop an online learning algorithm for finding suboptimal static output-feedback controllers for partially unknown continuous-time (CT) linear systems. To our knowledge, this is the first reinforcement-learning-based static output-feedback control design method for CT systems. An online policy iteration (PI) algorithm is developed that uses integral reinforcement knowledge to learn a suboptimal static output-feedback solution without requiring knowledge of the system drift dynamics. Specifically, in the policy evaluation step an IRL Bellman equation is used to evaluate an output-feedback policy, and in the policy improvement step the output-feedback gain is updated using the information provided by the evaluated policy. An adaptive observer supplies full-state estimates for the IRL Bellman equation during learning; the observer is no longer needed once learning is finished. The convergence of the proposed algorithm to a suboptimal output-feedback solution and the performance of the proposed method are verified through simulations.
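To make the abstract's learning loop concrete, the following is a minimal Python sketch of an IRL-based policy iteration for a static output-feedback gain, under loudly stated assumptions: all matrices and gains are illustrative (not from the paper); the input matrix B and output matrix C are assumed known, while the drift matrix A appears only to simulate data, mirroring the partially-unknown setting; the improvement step K <- R^{-1} B' P C' (C C')^{-1} is a common suboptimal output-feedback projection assumed here in place of the paper's exact update; and direct access to the simulated state stands in for the paper's adaptive observer.

# Hedged sketch of IRL policy iteration for static output feedback.
# Assumptions: B, C known; A used only to generate data; the improvement
# step below is a hypothetical suboptimal OPFB update, not the paper's.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, -2.0]])   # drift, unknown to the learner
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = np.array([[1.0]])                      # output weight
R = np.array([[1.0]])                      # input weight
T = 0.05                                   # IRL integration window

def window(x0, K):
    """Integrate the closed loop u = -K y and the stage cost over [0, T]."""
    def f(t, z):
        x = z[:2]
        y = C @ x
        u = -K @ y
        return np.concatenate([A @ x + B @ u,
                               [float(y @ Q @ y + u @ R @ u)]])
    z = solve_ivp(f, [0.0, T], np.concatenate([x0, [0.0]]), rtol=1e-8).y[:, -1]
    return z[:2], z[2]

def phi(x):
    """Quadratic basis: x' P x = phi(x) @ p with p = [p11, p12, p22]."""
    return np.array([x[0]**2, 2.0 * x[0] * x[1], x[1]**2])

K = np.array([[0.2]])                      # initial stabilizing OPFB gain
rng = np.random.default_rng(0)
for it in range(10):
    # Policy evaluation: stack on-policy IRL Bellman equations
    #   phi(x(t)) @ p - phi(x(t+T)) @ p = integral of stage cost
    # over windows from random initial states, then solve for P.
    Phi, c = [], []
    for k in range(30):
        x0 = rng.uniform(-1.0, 1.0, size=2)
        xT, cost = window(x0, K)
        Phi.append(phi(x0) - phi(xT))
        c.append(cost)
    p = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)[0]
    P = np.array([[p[0], p[1]], [p[1], p[2]]])
    # Policy improvement (hypothetical OPFB projection of the usual
    # state-feedback update K <- R^{-1} B' P):
    K_new = np.linalg.solve(R, B.T @ P @ C.T) @ np.linalg.inv(C @ C.T)
    if np.linalg.norm(K_new - K) < 1e-6:
        break
    K = K_new
print("learned output-feedback gain K =", K)

Starting each evaluation window from a random initial state, with the policy applied exactly, keeps the Bellman equation satisfied along the data while still exciting the least-squares problem for P; this sidesteps the bias that additive probing noise would introduce into an on-policy residual.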

Meeting Name

11th World Congress on Intelligent Control and Automation (WCICA) (2014: Jun. 29 - Jul. 4, Shenyang, China)

Department(s)

Electrical and Computer Engineering

Keywords and Phrases

Integral Reinforcement Learning; Optimal Control; Output-Feedback Control

International Standard Book Number (ISBN)

978-1-4799-5825-2

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2015 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Mar 2015
