Model-Free Event-Triggered Containment Control of Multi-Agent Systems
This paper presents a model-free distributed event-triggered containment control scheme for linear multi-agent systems. The proposed event-triggered scheme guarantees asymptotic stability of the equilibrium point of the containment error as well as the avoidance of Zeno behavior. To relax the requirement of complete knowledge of the dynamics, we combine an off-policy reinforcement learning algorithm in an actor-critic structure with the event-triggered control mechanism to obtain the feedback gain of the distributed containment control protocol. A simulation experiment is conducted to verify the effectiveness of the approach.
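The core idea the abstract describes, updating each agent's control only at triggering instants rather than continuously, can be illustrated with a minimal sketch. The dynamics, gain `K`, and relative-error triggering rule below are illustrative assumptions, not the paper's actual containment protocol or learned gain:

```python
import numpy as np

def simulate_event_triggered(A, B, K, x0, dt=0.01, steps=500, sigma=0.05):
    """Hold the control at the last broadcast state x_hat; re-sample (an
    'event') only when the measurement error exceeds a threshold.
    All parameters here are assumed for illustration."""
    x = x0.copy()
    x_hat = x0.copy()          # last transmitted state
    events = 0
    for _ in range(steps):
        e = x_hat - x          # measurement error since the last event
        # Trigger when the error norm exceeds a fraction of the state norm
        if np.linalg.norm(e) > sigma * np.linalg.norm(x):
            x_hat = x.copy()   # event: broadcast the current state
            events += 1
        u = -K @ x_hat         # control uses only the held (broadcast) state
        x = x + dt * (A @ x + B @ u)   # Euler step of the linear dynamics
    return x, events

# Double-integrator example with an assumed stabilizing gain
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.8]])
x_final, n_events = simulate_event_triggered(A, B, K, np.array([1.0, 0.0]))
```

Because the control is recomputed only at events, communication happens far less often than the integration step, which is the practical payoff of event-triggered schemes; the paper's contribution is additionally learning `K` model-free and ruling out Zeno behavior analytically.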
Y. Yang et al., "Model-Free Event-Triggered Containment Control of Multi-Agent Systems," Proceedings of the 2018 American Control Conference, Milwaukee, WI, pp. 877-884, Institute of Electrical and Electronics Engineers (IEEE), Jun. 2018.
The definitive version is available at https://doi.org/10.23919/ACC.2018.8430818
2018 American Control Conference (2018, Jun. 27-29, Milwaukee, WI)
Electrical and Computer Engineering
Intelligent Systems Center
Center for High Performance Computing Research
Keywords and Phrases
Actor-critic; Containment control; Event-triggered; Multi-agent systems; Off-policy reinforcement learning
Article - Conference proceedings
© 2018 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.
01 Jun 2018