Abstract

This study presents a novel reinforcement-learning-based optimal tracking control scheme for partially uncertain nonlinear discrete-time (DT) systems with state constraints using a zero-sum game (ZSG) formulation. To address optimal tracking, a novel augmented system consisting of the tracking error and its integral, along with an uncertain desired trajectory, is constructed. A barrier function (BF) with a tradeoff factor is incorporated into the cost function to keep the state trajectories within a compact set and to balance safety with optimality. Next, by using the modified value functional, the ZSG formulation is introduced, wherein an actor–critic neural network (NN) framework is employed to approximate the value functional, the optimal control input, and the worst-case disturbance. The critic NN weights are tuned once at the sampling instants and then iteratively within the sampling intervals. Using control input errors, the actor NN weights are adjusted once per sampling instant. A concurrent learning term in the critic weight tuning law eliminates the need for the persistency of excitation (PE) condition. Further, a weight consolidation scheme is incorporated into the critic update law to attain lifelong learning by overcoming catastrophic forgetting. Finally, a numerical example supports the analytical claims.
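As a purely illustrative sketch (the exact cost, notation, and weightings appear in the full paper and may differ), a barrier-function-augmented zero-sum game cost of the kind described above can be written as

    J(e_k) = \sum_{i=k}^{\infty} \left( e_i^{\top} Q e_i + u_i^{\top} R u_i - \gamma^{2} d_i^{\top} d_i + \alpha\, B(e_i) \right),

where e_i denotes the augmented tracking-error state, u_i the control input (minimizing player), d_i the disturbance (maximizing player), B(\cdot) a barrier function that grows unbounded as e_i approaches the constraint boundary, and \alpha the tradeoff factor balancing safety against optimality. The optimal value is then the saddle point of J over the two players, which the actor–critic NNs approximate.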

Department(s)

Electrical and Computer Engineering

Second Department

Computer Science

Publication Status

Early Access

Keywords and Phrases

Artificial neural networks; Barrier Lyapunov function; Costs; experience replay; hybrid learning; lifelong learning; Nonlinear dynamical systems; optimal tracking control; Safety; Steady-state; Task analysis; Trajectory; zero-sum game (ZSG)

International Standard Serial Number (ISSN)

2168-2232; 2168-2216

Document Type

Article - Journal

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2023 Institute of Electrical and Electronics Engineers. All rights reserved.

Publication Date

01 Jan 2023
