Optimal Adaptive Control of Partially Uncertain Linear Continuous-Time Systems with State Delay


In this chapter, the optimal adaptive control (OAC) of partially unknown linear continuous-time systems with state delay is introduced using integral reinforcement learning (IRL). A quadratic cost function over an infinite time horizon is considered, and a value functional is defined that accounts for the delayed state. A delay algebraic Riccati equation (DARE) is derived from this value functional and used, together with the stationarity condition, to obtain the optimal control policy. It is then shown that, when the dynamics are known, the optimal control policy renders the closed-loop system asymptotically stable under mild conditions. Next, to remove the need for the drift dynamics, an actor-critic framework based on the IRL approach is introduced for OAC, and a novel update law for tuning the parameters of the critic function is derived. Lyapunov theory is employed to demonstrate the boundedness of the closed-loop system when the OAC uses periodic or event-sampled feedback. Finally, future work involving an image sensor as part of the OAC loop, using deep neural network-based RL, is discussed. A simulation example is included to illustrate the effectiveness of the proposed approach.
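As a rough sketch of the formulation the abstract describes (the symbols $A$, $A_d$, $B$, $Q$, $R$, and the delay $\tau$ are assumed notation for illustration, not taken verbatim from the chapter), the state-delay system, the quadratic cost, and the IRL relation can be written as:

```latex
% Linear continuous-time system with state delay (assumed notation)
\dot{x}(t) = A\,x(t) + A_d\,x(t-\tau) + B\,u(t),
\qquad x(\theta) = \phi(\theta), \quad \theta \in [-\tau, 0]

% Quadratic cost over an infinite time horizon
J(u) = \int_{0}^{\infty} \big( x^{\top}(s)\,Q\,x(s) + u^{\top}(s)\,R\,u(s) \big)\, ds,
\qquad Q \succeq 0,\; R \succ 0

% Integral reinforcement learning (IRL) Bellman relation over an interval T:
% the value along the trajectory is expressed using measured data only
V(x_t) = \int_{t}^{t+T} \big( x^{\top} Q\, x + u^{\top} R\, u \big)\, ds + V(x_{t+T})
```

The key property of the IRL relation is that it holds along measured trajectories without evaluating the drift terms $A\,x$ and $A_d\,x(t-\tau)$, which is what allows the critic parameters to be tuned when the drift dynamics are unknown, as described above.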


Electrical and Computer Engineering

Research Center/Lab(s)

Intelligent Systems Center


This work was supported in part by the Intelligent Systems Center and by the National Science Foundation under Grants ECCS-1406533 and CMMI-1547042.

International Standard Serial Number (ISSN)

2198-4190; 2198-4182

Document Type

Book - Chapter




© 2021 Springer Nature. All rights reserved.

Publication Date

24 Jun 2021