Optimal Model-Free Output Synchronization of Heterogeneous Systems using Off-Policy Reinforcement Learning
Abstract
This paper considers optimal output synchronization of heterogeneous linear multi-agent systems. Standard approaches to output synchronization of heterogeneous systems require either solving the output regulator equations or incorporating a p-copy of the leader's dynamics in the controller of each agent. By contrast, the approach presented here requires neither; moreover, both the leader's and the followers' dynamics are assumed to be unknown. First, a distributed adaptive observer is designed to estimate the leader's state for each agent. The output synchronization problem is then formulated as an optimal control problem, and a novel model-free off-policy reinforcement learning algorithm is developed to solve it online in real time. It is shown that this optimal distributed approach implicitly solves the output regulator equations without explicitly computing their solution. Simulation results verify the effectiveness of the proposed approach.
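For context, the output regulator equations that the abstract says are solved implicitly take the following standard form from the output regulation literature. The notation below (follower matrices A_i, B_i, C_i; leader dynamics S with output matrix R; unknowns \Pi_i, \Gamma_i) is the usual convention for heterogeneous leader-follower systems and is a reader's sketch, not reproduced from the paper:

  Follower i:  \dot{x}_i = A_i x_i + B_i u_i, \quad y_i = C_i x_i
  Leader:      \dot{\zeta} = S \zeta, \quad y_0 = R \zeta

  Output regulator equations in (\Pi_i, \Gamma_i):
    A_i \Pi_i + B_i \Gamma_i = \Pi_i S,
    C_i \Pi_i = R.

In the classical model-based design, a feedforward-feedback law of the form u_i = \Gamma_i \zeta + K_i (x_i - \Pi_i \zeta) then drives the output error y_i - y_0 to zero. The contribution highlighted in the abstract is achieving this outcome without knowledge of S, A_i, or B_i and without computing \Pi_i and \Gamma_i explicitly.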
Recommended Citation
H. Modares et al., "Optimal Model-Free Output Synchronization of Heterogeneous Systems using Off-Policy Reinforcement Learning," Automatica, vol. 71, pp. 334–341, Elsevier, Sep. 2016.
The definitive version is available at https://doi.org/10.1016/j.automatica.2016.05.017
Department(s)
Electrical and Computer Engineering
Keywords and Phrases
Learning Algorithms; Optimal Control Systems; Reinforcement Learning; Synchronization; Adaptive Observer; Distributed Approaches; Heterogeneous Systems; Leader-Follower; Optimal Control Problem; Output Regulation; Output Synchronization; Regulator Equations; Multi-Agent Systems; Leader-Follower Systems
International Standard Serial Number (ISSN)
0005-1098
Document Type
Article - Journal
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2016 Elsevier. All rights reserved.
Publication Date
01 Sep 2016