Distributed Control of Leader-Follower Systems under Adversarial Inputs using Reinforcement Learning

Abstract

In this paper, a model-free reinforcement learning (RL) based distributed control protocol for leader-follower multi-agent systems is presented. Although RL has been successfully used to learn optimal control protocols for multi-agent systems, existing results neglect the effects of adversarial inputs. The susceptibility of the standard synchronization control protocol to adversarial inputs is first shown. Then, an RL-based distributed control framework is developed for multi-agent systems to prevent the corrupted data of a compromised agent from propagating across the network. To this end, only the leader communicates its actual sensory information; every other agent estimates the leader state using a distributed observer and communicates this estimate to its neighbors to reach consensus on the leader state. The observer cannot be physically affected by any adversarial input. Therefore, it guarantees that all intact agents synchronize to the leader trajectory despite the presence of a compromised agent. A distributed control protocol is used to further enhance resilience by attenuating the effect of adversarial inputs on the compromised agent itself. An off-policy RL algorithm is developed to solve the output synchronization control problem online, using only data measured along the system trajectories.
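As a rough illustration of the observer idea described above (a sketch, not code from the paper), the simulation below models a hypothetical four-follower network in which only one follower is pinned to the leader's true state; all other followers update their leader-state estimates from neighbors' estimates alone, so a compromised agent's corrupted sensory data never enters the estimation layer. The leader dynamics matrix S, the adjacency matrix A, the pinning gains g, and the coupling gain c are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a distributed leader-state observer (assumed parameters,
# not taken from the paper). Only followers with g[i] > 0 use the leader's
# actual state x0; the rest rely solely on neighbors' estimates.

S = np.array([[0.0, 1.0],            # leader drift matrix (harmonic oscillator, assumed)
              [-1.0, 0.0]])
A = np.array([[0, 1, 0, 0],          # follower adjacency matrix (assumed path graph)
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
g = np.array([1.0, 0.0, 0.0, 0.0])   # only follower 0 is pinned to the leader
c, dt, steps = 5.0, 0.001, 20000     # coupling gain and Euler integration step

x0 = np.array([1.0, 0.0])            # leader state
zeta = np.random.randn(4, 2)         # followers' estimates of the leader state

for _ in range(steps):
    x0 = x0 + dt * (S @ x0)          # leader evolves autonomously
    new_zeta = zeta.copy()
    for i in range(4):
        # Consensus on neighbors' estimates plus pinning to the true leader state.
        consensus = sum(A[i, j] * (zeta[j] - zeta[i]) for j in range(4))
        pinning = g[i] * (x0 - zeta[i])
        new_zeta[i] = zeta[i] + dt * (S @ zeta[i] + c * (consensus + pinning))
    zeta = new_zeta

print(np.linalg.norm(zeta - x0, axis=1))  # estimation errors should approach zero
```

Each intact follower's estimate converges to the leader trajectory; in the framework summarized above, such an estimate, rather than neighbors' raw measurements, is what drives each agent's local synchronization controller before the off-policy RL stage.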

Meeting Name

2017 IEEE Symposium Series on Computational Intelligence (SSCI 2017), Nov. 27 - Dec. 1, 2017, Honolulu, HI

Department(s)

Electrical and Computer Engineering

Keywords and Phrases

Control; Distributed Control; Multi-Agent System; Optimal Control; Reinforcement Learning

International Standard Book Number (ISBN)

978-1-5386-2725-9

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2018 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.

Publication Date

01 Feb 2018
