Model-Free Event-Triggered Containment Control of Multi-Agent Systems

Abstract

This paper presents a model-free distributed event-triggered containment control scheme for linear multi-agent systems. The proposed event-triggered scheme guarantees asymptotic stability of the equilibrium point of the containment error as well as the avoidance of Zeno behavior. To relax the requirement of complete knowledge of the dynamics, we combine an off-policy reinforcement learning algorithm in an actor-critic structure with the event-triggered control mechanism to obtain the feedback gain of the distributed containment control protocol. A simulation experiment is conducted to verify the effectiveness of the approach.
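To illustrate the general idea behind event-triggered feedback described in the abstract, the following is a minimal sketch of a single scalar agent whose controller only receives a new state broadcast when the measurement error crosses a threshold. The static threshold `delta`, the gain `k`, and the integrator dynamics are illustrative assumptions, not the paper's specific trigger condition, learning algorithm, or multi-agent setup.

```python
def simulate_event_triggered(x0=5.0, k=1.0, delta=0.05, dt=0.01, steps=1000):
    """Simulate a scalar agent x_dot = u under event-triggered feedback.

    The control u = -k * x_hat uses the last *broadcast* state x_hat.
    A new broadcast (an "event") fires only when the measurement error
    |x - x_hat| exceeds the threshold delta, so communication happens
    far less often than the sampling rate. (Illustrative static
    threshold -- not the trigger condition from the paper.)
    """
    x = x0
    x_hat = x0      # state the controller last received
    events = 0
    for _ in range(steps):
        if abs(x - x_hat) > delta:  # trigger condition violated
            x_hat = x               # broadcast the current state
            events += 1
        u = -k * x_hat              # controller holds last broadcast
        x += dt * u                 # forward-Euler integration
    return x, events
```

Running this, the state settles near zero while the number of broadcasts stays well below the number of simulation steps, which is the communication saving that motivates event-triggered schemes.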

Meeting Name

2018 American Control Conference (2018, Jun. 27-29, Milwaukee, WI)

Department(s)

Electrical and Computer Engineering

Research Center/Lab(s)

Intelligent Systems Center

Second Research Center/Lab

Center for High Performance Computing Research

Comments

This work was supported in part by the Mary K. Finley Missouri Endowment, the Missouri S&T Intelligent Systems Center, the National Science Foundation and the National Natural Science Foundation of China (NSFC Grant No. 61333002) and by NATO under grant No. SPS G5176.

Keywords and Phrases

Actor-critic; Containment control; Event-triggered; Multi-agent systems; Off-policy reinforcement learning

International Standard Book Number (ISBN)

978-1-5386-5428-6

International Standard Serial Number (ISSN)

0743-1619; 2378-5861

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2018 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Jun 2018
