Off-Policy Reinforcement Learning for Synchronization in Multiagent Graphical Games


This paper develops an off-policy reinforcement learning (RL) algorithm to solve the optimal synchronization problem for multiagent systems. This is accomplished by using the framework of graphical games. In contrast to traditional control protocols, which require complete knowledge of the agent dynamics, the proposed off-policy RL algorithm is model-free, in that it solves the optimal synchronization problem without requiring any knowledge of the agent dynamics. A prescribed control policy, called the behavior policy, is applied to each agent to generate and collect data for learning. An off-policy Bellman equation is derived for each agent to simultaneously learn the value function of the policy under evaluation, called the target policy, and find an improved policy. Actor and critic neural networks, together with a least-squares approach, are employed to approximate the target control policies and value functions using the data generated by applying the prescribed behavior policies. Finally, an off-policy RL algorithm is presented that is implemented in real time and yields the approximate optimal control policy for each agent using only measured data. It is shown that the optimal distributed policies found by the proposed algorithm constitute a global Nash equilibrium and synchronize all agents to the leader. Simulation results illustrate the effectiveness of the proposed method.
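The evaluate-then-improve loop described in the abstract can be illustrated, in heavily simplified form, for a single scalar linear agent rather than the paper's multiagent graphical-game setting. Everything below is an illustrative assumption, not the paper's method: the system parameters (`a`, `b`), the cost weights (`q`, `r`), and the quadratic Q-function basis are made up. A fixed exploratory behavior policy generates the data once; the target policy is then repeatedly evaluated by least squares on the off-policy Bellman equation and improved greedily, without ever using the model inside the learning loop.

```python
import numpy as np

# Illustrative scalar system x' = a*x + b*u with stage cost q*x^2 + r*u^2.
# The true (a, b) are used ONLY to generate data, mimicking measurements.
a, b, q, r = 1.2, 1.0, 1.0, 1.0

rng = np.random.default_rng(0)

# Behavior policy: random exploratory inputs over random states (off-policy data).
N = 200
x = rng.uniform(-2.0, 2.0, N)
u = rng.uniform(-2.0, 2.0, N)
x_next = a * x + b * u

def phi(x, u):
    """Quadratic basis so that Q(x, u) = h11*x^2 + 2*h12*x*u + h22*u^2."""
    return np.stack([x**2, 2.0 * x * u, u**2], axis=-1)

K = 1.0  # initial stabilizing target-policy gain, u = -K*x
for _ in range(15):
    u_next = -K * x_next                 # target policy applied at the next state
    A = phi(x, u) - phi(x_next, u_next)  # off-policy Bellman temporal difference
    c = q * x**2 + r * u**2              # measured stage cost
    theta, *_ = np.linalg.lstsq(A, c, rcond=None)  # policy evaluation
    h11, h12, h22 = theta
    K = h12 / h22                        # greedy policy improvement

print(K)  # converges to the optimal LQR gain for this scalar example
```

Note that the same data set, generated once by the behavior policy, is reused at every iteration to evaluate each successive target policy; this reuse of off-policy data is what distinguishes the approach from on-policy schemes that must re-collect data after every policy update.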


Electrical and Computer Engineering

Keywords and Phrases

Dynamic Programming; Multiagent Systems (MAS); Software Agents; Synchronization; Bellman Equations; Control Protocols; Distributed Policies; Graphical Games; Least-Squares Approach; Nash Equilibria; Optimal Control Policy; Optimal Synchronization; Reinforcement Learning (RL); Neural Network (NN)

Document Type

Article - Journal

© 2017 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Oct 2017