Running Primal-Dual Gradient Method for Time-Varying Nonconvex Problems

From MaRDI portal
Publication:5093264

DOI: 10.1137/20M1371063
zbMATH Open: 1496.90068
arXiv: 1812.00613
OpenAlex: W2903451074
MaRDI QID: Q5093264
FDO: Q5093264


Authors: Yujie Tang, Emiliano Dall'Anese, Andrey Bernstein, Steven H. Low


Publication date: 26 July 2022

Published in: SIAM Journal on Control and Optimization

Abstract: This paper considers a nonconvex optimization problem that evolves over time, and addresses the synthesis and analysis of regularized primal-dual gradient methods to track a Karush-Kuhn-Tucker (KKT) trajectory. The proposed regularized primal-dual gradient methods are implemented in a running fashion, in the sense that the underlying optimization problem changes during the iterations of the algorithms. For a problem with twice continuously differentiable cost and constraints, and under a generalization of the Mangasarian-Fromovitz constraint qualification, sufficient conditions are derived for the running algorithm to track a KKT trajectory. Further, asymptotic bounds for the tracking error (as a function of the time-variability of a KKT trajectory) are obtained. A continuous-time version of the algorithm, framed as a system of differential inclusions, is also considered and analytical convergence results are derived. For the continuous-time setting, a set of sufficient conditions for the KKT trajectories not to bifurcate or merge is proposed. Illustrative numerical results inspired by a real-world application are provided.


Full work available at URL: https://arxiv.org/abs/1812.00613










Cited In (5)





