Optimal Adaptive Control of Partially Uncertain Linear Continuous-Time Systems with State Delay
In this chapter, optimal adaptive control (OAC) of partially unknown linear continuous-time systems with state delay is introduced using integral reinforcement learning (IRL). A quadratic cost function over an infinite time horizon is considered, and a value functional is defined that accounts for the delayed state. A delay algebraic Riccati equation (DARE) is derived from this value functional and used together with the stationarity condition to obtain the optimal control policy. It is then shown that, when the dynamics are known, the optimal control policy renders the closed-loop system asymptotically stable under mild conditions. To remove the need for the drift dynamics, an actor-critic framework based on the IRL approach is introduced for OAC, and a novel update law for tuning the parameters of the critic function is derived. Lyapunov theory is employed to demonstrate the boundedness of the closed-loop system when the OAC uses periodic and event-sampled feedback. A simulation example is included to illustrate the effectiveness of the proposed approach, and future work involving an image sensor as part of the OAC loop using deep-neural-network-based RL is discussed.
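For concreteness, the class of systems and cost described in the abstract can be written in a standard form. The symbols below (system matrices A, A_d, B, delay tau, and weights Q, R) follow common usage in the state-delay optimal control literature and are not necessarily the chapter's exact notation:

```latex
% Linear continuous-time system with a single constant state delay \tau
% (A, A_d, B, and the initial function \phi are assumed notation)
\dot{x}(t) = A\,x(t) + A_d\,x(t-\tau) + B\,u(t), \qquad
x(\theta) = \phi(\theta), \quad \theta \in [-\tau, 0]

% Infinite-horizon quadratic cost, with Q \succeq 0 and R \succ 0
J = \int_{0}^{\infty} \bigl( x^{\top}(t)\,Q\,x(t) + u^{\top}(t)\,R\,u(t) \bigr)\, dt
```

In this setting the IRL approach avoids explicit knowledge of the drift matrices (A, A_d) by evaluating the value functional along measured trajectories over sampling intervals.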
R. Moghadam et al., "Optimal Adaptive Control of Partially Uncertain Linear Continuous-Time Systems with State Delay," in Handbook of Reinforcement Learning and Control, vol. 325, Springer, Jun. 2021, pp. 243–272.
The definitive version is available at https://doi.org/10.1007/978-3-030-60990-0_9
Electrical and Computer Engineering
Intelligent Systems Center
Book Chapter
© 2021 Springer Nature, All rights reserved.
24 Jun 2021