Disturbance Rejection of Multi-Agent Systems: A Reinforcement Learning Differential Game Approach
Abstract
Distributed tracking control of multi-agent linear systems in the presence of disturbances is considered in this paper. The problem is first formulated as a multi-player zero-sum differential graphical game. It is shown that solving this problem requires solving a set of coupled Hamilton-Jacobi-Isaacs (HJI) equations. A multi-agent reinforcement learning algorithm is developed to find the solution to these coupled HJI equations, and its convergence to the optimal solution is proven. It is also shown that the proposed method guarantees L2-bounded synchronization errors in the presence of dynamical disturbances.
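The record does not reproduce the paper's equations. As a minimal illustrative sketch only, the coupled HJI equations referred to in the abstract typically take a form such as the one below, where the notation (local synchronization error \varepsilon_i, weights Q_i, R_{ii}, and disturbance-attenuation level \gamma_i) is assumed here for illustration rather than taken from the paper:

\[
V_i(\varepsilon_i(t)) \;=\; \int_t^{\infty} \Big( \varepsilon_i^{\top} Q_i \varepsilon_i + u_i^{\top} R_{ii} u_i - \gamma_i^{2}\, w_i^{\top} w_i \Big)\, d\tau,
\]
\[
0 \;=\; \min_{u_i}\,\max_{w_i} \Big[ \varepsilon_i^{\top} Q_i \varepsilon_i + u_i^{\top} R_{ii} u_i - \gamma_i^{2}\, w_i^{\top} w_i + \big(\nabla V_i\big)^{\top} \dot{\varepsilon}_i \Big].
\]

In a graphical game the error dynamics \dot{\varepsilon}_i depend on the control and disturbance policies of agent i's neighbors, so these equations are coupled across agents; this is why the abstract develops a multi-agent reinforcement learning algorithm to approximate their joint solution rather than solving each equation in isolation.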
Recommended Citation
Q. Jiao et al., "Disturbance Rejection of Multi-Agent Systems: A Reinforcement Learning Differential Game Approach," Proceedings of the American Control Conference (2015, Chicago, IL), pp. 737-742, Institute of Electrical and Electronics Engineers (IEEE), Jul 2015.
The definitive version is available at https://doi.org/10.1109/ACC.2015.7170822
Meeting Name
American Control Conference (ACC) (2015: Jul. 1-3, Chicago, IL)
Department(s)
Electrical and Computer Engineering
Keywords and Phrases
Disturbance Rejection; Learning Algorithms; Linear Systems; Reinforcement Learning; Differential Games; Distributed Tracking; Graphical Games; Hamilton-Jacobi-Isaacs; Multi-Agent; Multi-Agent Reinforcement Learning; Optimal Solutions; Synchronization Error; Multi-Agent Systems
International Standard Book Number (ISBN)
978-1-4799-8684-2
International Standard Serial Number (ISSN)
0743-1619
Document Type
Article - Conference proceedings
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2015 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.
Publication Date
01 Jul 2015