Abstract

In Quadratically Constrained Quadratic Programming (QCQP) with dynamic parameters, the effectiveness of optimization approaches depends heavily on the quality of the initial guess. To address this challenge, this paper proposes a novel approach that leverages reinforcement learning (RL) to generate high-performing initial guesses for iterative algorithms, taking the dynamic parameters as inputs. The approach aims to accelerate convergence and improve the objective value, enabling efficient and effective solutions to QCQP problems under parameter variability. We evaluate the proposed approach by applying it to the Iterative Rank Minimization (IRM) algorithm. Empirical evaluations demonstrate its efficacy in solving QCQP problems with dynamic parameters: the RL-guided IRM algorithm yields high-quality solutions, with significantly improved optimality and faster convergence compared to the original IRM algorithm.
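
The abstract describes a policy that maps the dynamic problem parameters to a warm-start point for an iterative QCQP solver. The sketch below illustrates that idea only; it is not the paper's method. The policy here is a hypothetical small MLP with placeholder weights (in practice these would be learned with RL), and `iterative_solver` stands in for IRM with a generic projected-gradient loop on a toy ball-constrained QCQP instance built from the parameters.

```python
# Minimal sketch of RL-guided warm starting for a parameterized QCQP.
# Assumptions (not from the paper): policy_initial_guess, iterative_solver,
# and the weights W1, b1, W2, b2 are illustrative stand-ins.
import numpy as np


def policy_initial_guess(theta, W1, b1, W2, b2):
    """Map dynamic parameters theta to an initial guess x0 (hypothetical policy)."""
    h = np.tanh(W1 @ theta + b1)          # hidden layer
    return W2 @ h + b2                    # warm-start point x0


def iterative_solver(Q, c, x0, radius=1.0, steps=200, lr=0.05):
    """Placeholder for an IRM-like iterative routine:
    minimize 0.5 x^T Q x + c^T x  s.t.  ||x||^2 <= radius^2,
    solved here by projected gradient descent starting from x0."""
    x = x0.copy()
    for _ in range(steps):
        grad = Q @ x + c                  # gradient of the quadratic objective
        x = x - lr * grad
        norm = np.linalg.norm(x)
        if norm > radius:                 # project back onto the feasible ball
            x = x * (radius / norm)
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 4, 3                           # decision and parameter dimensions
    theta = rng.normal(size=p)            # dynamic parameters of this instance
    # Hypothetical policy weights (random here; learned via RL in practice).
    W1, b1 = rng.normal(size=(8, p)), np.zeros(8)
    W2, b2 = rng.normal(size=(n, 8)), np.zeros(n)
    # Toy problem data that depends on the dynamic parameters.
    A = rng.normal(size=(n, n))
    Q = A @ A.T + np.eye(n)               # positive definite quadratic cost
    c = rng.normal(size=n) * theta.sum()
    x0 = policy_initial_guess(theta, W1, b1, W2, b2)
    x_star = iterative_solver(Q, c, x0)
    print("warm start:", x0)
    print("solution:  ", x_star)
```

The key design point, under these assumptions, is that only the initial guess changes: the downstream iterative solver is untouched, so any convergence guarantees it offers are preserved while the learned warm start can reduce iteration count and improve the attained objective value.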

Department(s)

Mechanical and Aerospace Engineering

Comments

National Science Foundation, Grant CPS-2201568

International Standard Serial Number (ISSN)

2576-2370; 0743-1546

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2024 Institute of Electrical and Electronics Engineers, All rights reserved.

Publication Date

01 Jan 2023
