Abstract
Actor-critic reinforcement learning algorithms have proven to be a successful tool for learning optimal control of a range of (repetitive) tasks on systems with (partially) unknown dynamics, which may or may not be nonlinear. Most of the reinforcement learning literature published to date deals only with modeling the task at hand as a Markov decision process with an infinite horizon cost function. In practice, however, a solution is sometimes desired for the case where the cost function is defined over a finite horizon, which means that the optimal control problem is time-varying and thus harder to solve. This paper adapts two previously introduced actor-critic algorithms from the infinite horizon setting to the finite horizon setting and applies them to learning a task on a nonlinear system, without requiring any assumptions or knowledge about the system dynamics, using radial basis function networks. Simulations on a typical nonlinear motion control problem are carried out, showing that actor-critic algorithms are capable of solving the difficult problem of time-varying optimal control. Moreover, the benefit of using a model learning technique is shown. © 2013 IEEE.
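The key difference from the infinite horizon case is that the value function and policy become time-dependent, so the function approximators must depend on the time step as well as the state. The following is a minimal sketch of how this could look with radial basis function approximators; it is illustrative only, and the dynamics, stage cost, horizon, learning rates, and network layout are assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative sketch of a finite-horizon actor-critic update with radial basis
# function (RBF) approximators over the (state, time) pair. All numerical
# choices below are assumptions for illustration, not the paper's setup.

rng = np.random.default_rng(0)

T = 50                                            # finite horizon (time steps)
dt = 0.05
state_centers = np.linspace(-np.pi, np.pi, 11)    # RBF centers over the state
time_centers = np.linspace(0, T - 1, 6)           # RBF centers over time
sigma_x, sigma_t = 0.5, 8.0

def features(x, t):
    """RBF features over (state, time); the time dependence is what
    distinguishes the finite-horizon approximators from infinite-horizon ones."""
    phi_x = np.exp(-(x - state_centers) ** 2 / (2 * sigma_x ** 2))
    phi_t = np.exp(-(t - time_centers) ** 2 / (2 * sigma_t ** 2))
    return np.outer(phi_x, phi_t).ravel()

n_feat = len(state_centers) * len(time_centers)
theta_critic = np.zeros(n_feat)   # critic weights: V(x, t) ~ theta_critic @ features
theta_actor = np.zeros(n_feat)    # actor weights:  u(x, t) ~ theta_actor @ features

def step(x, u):
    """Toy nonlinear dynamics (assumed): a pendulum-like scalar system."""
    return x + dt * (np.sin(x) + u)

alpha_c, alpha_a = 0.1, 0.02      # learning rates (assumed values)
sigma_explore = 0.3               # exploration noise on the action

for episode in range(200):
    x = rng.uniform(-np.pi, np.pi)
    for t in range(T):
        phi = features(x, t)
        u = theta_actor @ phi + rng.normal(0, sigma_explore)
        x_next = step(x, u)
        cost = x_next ** 2 + 0.1 * u ** 2         # quadratic stage cost (assumed)

        # Finite-horizon TD error with costs: no discounting, and the value
        # beyond the horizon is taken as zero (a terminal cost could be used).
        v = theta_critic @ phi
        v_next = 0.0 if t == T - 1 else theta_critic @ features(x_next, t + 1)
        delta = cost + v_next - v

        # Critic: gradient step that reduces the squared TD error.
        theta_critic += alpha_c * delta * phi
        # Actor: move toward exploratory actions that lowered the cost
        # (delta < 0), away from those that raised it.
        theta_actor -= alpha_a * delta * (u - theta_actor @ phi) * phi

        x = x_next
```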
Recommended Citation
I. Grondman et al., "Solutions to Finite Horizon Cost Problems using Actor-critic Reinforcement Learning," Proceedings of the International Joint Conference on Neural Networks, article no. 6706755, Institute of Electrical and Electronics Engineers, Dec 2013.
The definitive version is available at https://doi.org/10.1109/IJCNN.2013.6706755
Department(s)
Electrical and Computer Engineering
Second Department
Computer Science
International Standard Book Number (ISBN)
978-1-4673-6129-3
Document Type
Article - Conference proceedings
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2024 Institute of Electrical and Electronics Engineers, All rights reserved.
Publication Date
01 Dec 2013