Leader-Follower Output Synchronization of Linear Heterogeneous Systems With Active Leader using Reinforcement Learning


This paper develops optimal control protocols for the distributed output synchronization problem of leader-follower multiagent systems with an active leader. The agents are heterogeneous, with different dynamics and state dimensions. The desired trajectory is preplanned and generated by the leader, and the follower agents autonomously synchronize to it by exchanging information over a communication network. The leader is active in the sense that it has a nonzero control input, so it can act independently and update its control to steer the followers away from possible danger. A distributed observer is first designed to estimate the leader's state and generate the reference signal for each follower. Output synchronization with an active leader is then formulated as a distributed optimal tracking problem, and inhomogeneous algebraic Riccati equations (AREs) are derived to solve it. The resulting distributed optimal control protocols not only minimize the steady-state error but also optimize the transient response of the agents. An off-policy reinforcement learning (RL) algorithm is developed to solve the inhomogeneous AREs online, in real time, without requiring any knowledge of the agents' dynamics. Finally, two simulation examples illustrate the effectiveness of the proposed algorithm.
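To give a concrete flavor of the ARE-based design the abstract describes, the sketch below runs model-based policy iteration (Kleinman's method) for a standard single-agent LQR ARE. This is not the paper's algorithm: the paper's off-policy RL scheme reproduces such iterations from measured input-state data, without a model, and for inhomogeneous AREs in a multiagent setting. All matrices here are illustrative assumptions.

```python
# Minimal sketch, assuming an illustrative stable single-agent system
# xdot = A x + B u; NOT the paper's data-driven, multiagent method.
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # stable, so K = 0 is admissible
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state weight
R = np.array([[1.0]])  # input weight

K = np.zeros((1, 2))   # initial stabilizing gain
for _ in range(15):
    Ac = A - B @ K
    # Policy evaluation: solve the Lyapunov equation
    #   Ac' P + P Ac + Q + K' R K = 0
    P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B' P
    K = np.linalg.solve(R, B.T @ P)

# The iterates converge to the stabilizing ARE solution
P_star = solve_continuous_are(A, B, Q, R)
K_star = np.linalg.solve(R, B.T @ P_star)
```

An off-policy RL variant evaluates the same Lyapunov-equation step using trajectory data collected under an exploratory input, which is what removes the need to know A and B.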


Department

Electrical and Computer Engineering

Second Department

Computer Science

Research Center/Lab(s)

Intelligent Systems Center

Second Research Center/Lab

Center for High Performance Computing Research


Early Access

Keywords and Phrases

Algebra; Autonomous Agents; Heuristic Algorithms; Multi-Agent Systems; Network Protocols; Optimization; Reinforcement Learning (RL); Riccati Equations; Synchronization; Trajectories; Transient Analysis; Active Leader; Inhomogeneous Algebraic Riccati Equations (AREs); Heterogeneous Systems; Learning (Artificial Intelligence); Nonhomogeneous Media; Observers; Output Synchronization; Unknown Follower; Learning Algorithms

Document Type

Article - Journal

© 2018 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Mar 2018