Running Primal-Dual Gradient Method for Time-Varying Nonconvex Problems
From MaRDI portal
Publication: 5093264
Abstract: This paper considers a nonconvex optimization problem that evolves over time, and addresses the synthesis and analysis of regularized primal-dual gradient methods to track a Karush-Kuhn-Tucker (KKT) trajectory. The proposed regularized primal-dual gradient methods are implemented in a running fashion, in the sense that the underlying optimization problem changes during the iterations of the algorithms. For a problem with twice continuously differentiable cost and constraints, and under a generalization of the Mangasarian-Fromovitz constraint qualification, sufficient conditions are derived for the running algorithm to track a KKT trajectory. Further, asymptotic bounds for the tracking error (as a function of the time-variability of a KKT trajectory) are obtained. A continuous-time version of the algorithm, framed as a system of differential inclusions, is also considered and analytical convergence results are derived. For the continuous-time setting, a set of sufficient conditions for the KKT trajectories not to bifurcate or merge is proposed. Illustrative numerical results inspired by a real-world application are provided.
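Below is a minimal sketch of the kind of running, regularized primal-dual gradient update described in the abstract, assuming a time-varying problem of the form min_x f(x, t) subject to g(x, t) <= 0 and a dual-regularized Lagrangian L(x, λ; t) = f(x, t) + λᵀg(x, t) − (ε/2)‖λ‖². The function names, step size, and regularization weight are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def running_primal_dual(x0, lam0, grad_f, g, jac_g, times,
                        alpha=0.05, eps=0.1):
    """Hedged sketch of a running regularized primal-dual gradient scheme.

    The underlying problem changes between iterations, so only one
    primal-dual gradient step on the regularized Lagrangian is taken
    at each sampled time.  `alpha` (step size) and `eps` (dual
    regularization weight) are illustrative choices.
    """
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    trajectory = []
    for t in times:
        # Primal gradient descent step on the regularized Lagrangian.
        x = x - alpha * (grad_f(x, t) + jac_g(x, t).T @ lam)
        # Dual gradient ascent step with regularization, projected onto lam >= 0.
        lam = np.maximum(lam + alpha * (g(x, t) - eps * lam), 0.0)
        trajectory.append((t, x.copy(), lam.copy()))
    return trajectory


# Illustrative usage: track a time-varying quadratic objective with a
# moving linear constraint (both functions are made up for this example).
if __name__ == "__main__":
    grad_f = lambda x, t: x - np.array([np.cos(t), np.sin(t)])
    g = lambda x, t: np.array([x[0] + x[1] - 1.0 - 0.5 * np.sin(t)])
    jac_g = lambda x, t: np.array([[1.0, 1.0]])
    traj = running_primal_dual(np.zeros(2), np.zeros(1),
                               grad_f, g, jac_g,
                               times=np.linspace(0.0, 10.0, 500))
    print(traj[-1])
```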
Recommendations
- Primal-dual incremental gradient method for nonsmooth and convex optimization problems
- Non-stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates
- Primal-dual extragradient methods for nonlinear nonsmooth PDE-constrained optimization
- A stochastic primal-dual method for a class of nonconvex constrained optimization
- An efficient primal dual prox method for non-smooth optimization
- Primal-Dual Stochastic Gradient Method for Convex Programs with Many Functional Constraints
- Primal-Dual Interior Methods for Nonconvex Nonlinear Programming
- Online Primal-Dual Methods With Measurement Feedback for Time-Varying Convex Optimization
- A fast continuous time approach with time scaling for nonsmooth convex optimization
- Accelerated primal-dual gradient descent with linesearch for convex, nonconvex, and nonsmooth optimization problems
Cites work
- scientific article; zbMATH DE number 48687 (no title available)
- scientific article; zbMATH DE number 1502618 (no title available)
- A Contraction Analysis of Primal-Dual Dynamics in Distributed and Time-Varying Implementations
- A differential equation for modeling Nesterov's accelerated gradient method: theory and insights
- Accelerated methods for nonconvex optimization
- An Euler-Newton continuation method for tracking solution trajectories of parametric variational inequalities
- BV periodic solutions of an evolution problem associated with continuous moving convex sets
- Convergence Analysis of Saddle Point Problems in Time Varying Wireless Systems — Control Theoretical Approach
- Convex sweeping process in the framework of measure differential inclusions and evolution variational inequalities
- Discrete and Continuous-Time Soft-Thresholding for Dynamic Signal Recovery
- Distributed Continuous-Time Convex Optimization With Time-Varying Cost Functions
- Distributed Continuous-Time Convex Optimization on Weight-Balanced Digraphs
- Evolution equations governed by the sweeping process
- Gradient methods for nonstationary unconstrained optimization problems
- Implicit Functions and Solution Mappings
- Multiuser optimization: distributed algorithms and error analysis
- On LICQ and the uniqueness of Lagrange multipliers
- On the discrete analogues of some generalizations of Gronwall's inequality
- On the equivalence between complementarity systems, projected systems and differential inclusions
- Online Learning With Inexact Proximal Online Gradient Descent Algorithms
- Online Primal-Dual Methods With Measurement Feedback for Time-Varying Convex Optimization
- Prediction-Correction Algorithms for Time-Varying Constrained Optimization
- Real-time nonlinear optimization as a generalized equation
- Second-Order Online Nonconvex Optimization
- Semi-global exponential stability of augmented primal-dual gradient dynamics for constrained convex optimization
- The Role of Convexity in Saddle-Point Dynamics: Lyapunov Function and Robustness
Cited in (5)
- Discrete-time Euler-smoothing methods for time-varying convex constrained optimization
- Nonlinear optimization filters for stochastic time-varying convex optimization
- Primal-dual method for optimization problems with changing constraints
- Bounds for the tracking error of first-order online optimization methods
- Variable metric primal-dual method for convex optimization problems with changing constraints