In this article, a novel Q-learning scheduling method for the current controller of a switched reluctance motor (SRM) drive is investigated. Q-learning is a class of reinforcement learning algorithms that can find an optimal forward-in-time solution to a linear control problem. An augmented system is constructed from the reference current signal and the SRM model so that the algebraic Riccati equation of the current-tracking problem can be solved. This article introduces a new scheduled Q-learning algorithm that uses a table of Q-cores lying on the nonlinear surface of the SRM model, without requiring any information about the model parameters, to track the reference current trajectory by scheduling infinite-horizon linear quadratic trackers (LQTs) trained by Q-learning. Additionally, a linear interpolation algorithm is proposed to improve the transition of the LQT between trained Q-cores, ensuring a smooth response as the state variables evolve on the nonlinear surface of the model. Lastly, simulation and experimental results are provided to validate the effectiveness of the proposed control scheme.
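The scheduling idea described in the abstract can be illustrated with a minimal sketch. The function and variable names below (`interpolate_gain`, the operating-point `grid`, the per-core gain list `gains`) are hypothetical, not taken from the paper: the sketch only assumes that each trained Q-core yields an LQT feedback gain at one operating point, and that gains are blended linearly between neighboring cores so the control input stays smooth as the state moves across the model's nonlinear surface.

```python
import numpy as np

def interpolate_gain(theta, grid, gains):
    """Linearly interpolate a feedback gain matrix at operating point theta.

    grid  : 1-D sorted array of operating points where Q-cores were trained
    gains : list of gain matrices, one per grid point
    """
    # Clamp to the trained range, then locate the bracketing Q-cores.
    theta = np.clip(theta, grid[0], grid[-1])
    i = np.searchsorted(grid, theta)
    if i == 0:
        return np.asarray(gains[0])
    # Blend the two neighboring gains by the normalized distance.
    w = (theta - grid[i - 1]) / (grid[i] - grid[i - 1])
    return (1.0 - w) * np.asarray(gains[i - 1]) + w * np.asarray(gains[i])

# Example with two hypothetical trained gains at theta = 0.0 and theta = 1.0:
grid = np.array([0.0, 1.0])
gains = [np.array([[1.0, 0.0]]), np.array([[3.0, 2.0]])]
K_mid = interpolate_gain(0.5, grid, gains)  # halfway blend of the two gains
```

In the paper's scheme the gains themselves come from Q-learning (model-free), so this interpolation step needs no model parameters either, only the table of trained Q-cores.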
H. Alharkan et al., "Optimal Tracking Current Control of Switched Reluctance Motor Drives using Reinforcement Q-Learning Scheduling," IEEE Access, vol. 9, pp. 9926-9936, Institute of Electrical and Electronics Engineers (IEEE), Jan 2021.
The definitive version is available at https://doi.org/10.1109/ACCESS.2021.3050167
Electrical and Computer Engineering
Keywords and Phrases
Adaptive Dynamic Programming; Current Control; Least Square Methods; Motor Drive; Optimal Control; Reinforcement Learning; Switched Reluctance Motors
International Standard Serial Number (ISSN)
2169-3536
Article - Journal
© 2021 Institute of Electrical and Electronics Engineers (IEEE), All rights reserved.
Creative Commons Licensing
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
08 Jan 2021