A unified convergence rate analysis of the accelerated smoothed gap reduction algorithm
Publication: 2128772
DOI: 10.1007/S11590-021-01775-4 · zbMATH Open: 1491.90182 · arXiv: 2104.01314 · OpenAlex: W3178689845 · MaRDI QID: Q2128772 · FDO: Q2128772
Author: Quoc Tran-Dinh
Publication date: 22 April 2022
Published in: Optimization Letters
Abstract: In this paper, we develop a unified convergence analysis framework for the Accelerated Smoothed GAp ReDuction algorithm (ASGARD) introduced in [20, Tran-Dinh et al., 2015]. Unlike [20], the new analysis covers three settings in a single algorithm: general convexity, strong convexity, and strong convexity combined with smoothness. Moreover, we establish convergence guarantees on three criteria: (i) the gap function, (ii) the primal objective residual, and (iii) the dual objective residual. Our convergence rates are optimal (up to a constant factor) in all cases. While the convergence rate on the primal objective residual for the general convex case was established in [20], we prove additional convergence rates on the gap function and the dual objective residual. The analysis for the last two cases is completely new. Our results provide a complete picture of the convergence guarantees of ASGARD. Finally, we present four different numerical experiments on a representative optimization model to verify our algorithm and compare it with Nesterov's smoothing technique.
Full work available at URL: https://arxiv.org/abs/2104.01314
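For context, here is a minimal sketch of the setting the abstract refers to. The notation is illustrative and assumed by the editor (standard for Nesterov's smoothing and primal-dual gap reduction), not taken verbatim from the paper: the composite model is assumed to be \(\min_x F(x) := f(x) + g(Ax)\) with its convex-concave saddle-point reformulation
\[
\min_{x}\max_{y}\; \mathcal{L}(x,y) := f(x) + \langle Ax, y\rangle - g^{*}(y),
\]
whose dual problem is \(\max_y D(y) := -f^{*}(-A^{\top}y) - g^{*}(y)\). The three criteria in the abstract then correspond to the gap function
\[
G(\bar{x},\bar{y}) := \max_{y}\,\mathcal{L}(\bar{x},y) - \min_{x}\,\mathcal{L}(x,\bar{y}),
\]
the primal objective residual \(F(\bar{x}) - F^{\star}\), and the dual objective residual \(D^{\star} - D(\bar{y})\). Nesterov's smoothing technique, used as the baseline in the experiments, replaces \(g\) by a smoothed surrogate, e.g. with a quadratic prox term,
\[
g_{\beta}(u) := \max_{y}\,\bigl\{ \langle u, y\rangle - g^{*}(y) - \tfrac{\beta}{2}\|y\|^{2} \bigr\},
\]
whose gradient is Lipschitz with constant \(\|A\|^{2}/\beta\) after composition with \(A\), so an accelerated gradient scheme can be run on the smoothed problem while the smoothness parameter \(\beta\) is decreased.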
Recommendations
- Generalized Nesterov's accelerated proximal gradient algorithms with convergence rate of order \(o(1/k^2)\)
- An adaptive primal-dual framework for nonsmooth convex minimization
- A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates
- On the convergence analysis of the optimized gradient method
- On the convergence of stochastic primal-dual hybrid gradient
Keywords: primal-dual algorithm; Nesterov's smoothing technique; accelerated smoothed gap reduction; convex-concave saddle-point problem
Cites Work
- Disciplined convex programming
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Regularization and Variable Selection Via the Elastic Net
- Smooth minimization of non-smooth functions
- Introductory lectures on convex optimization. A basic course.
- On the \(O(1/n)\) convergence rate of the Douglas-Rachford alternating direction method
- Gradient methods for minimizing composite functions
- Title not available
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science
- Combining Lagrangian decomposition and excessive gap smoothing technique for solving large-scale separable convex optimization problems
- A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
- A Variable Metric Extension of the Forward–Backward–Forward Algorithm for Monotone Operators
- Convex analysis and monotone operator theory in Hilbert spaces
- Primal-dual decomposition by operator splitting and applications to image deblurring
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Optimal primal-dual methods for a class of saddle point problems
- A three-operator splitting scheme and its optimization applications
- An adaptive primal-dual framework for nonsmooth convex minimization
- Convergence Rate Analysis of Primal-Dual Splitting Schemes
- A smooth primal-dual optimization framework for nonsmooth composite convex minimization
- Variable smoothing for convex optimization problems using stochastic gradients
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- Inertial, Corrected, Primal-Dual Proximal Splitting
- Faster Lagrangian-Based Methods in Convex Optimization
- Non-stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates
Cited In (5)
- Acceleration of convergence of a two-level algebraic algorithm by aggregation in smoothing process
- A primal-dual flow for affine constrained convex optimization
- Improved convergence bounds for smoothed aggregation method: Linear dependence of the convergence rate on the number of levels
- Non-ergodic convergence rate of an inertial accelerated primal-dual algorithm for saddle point problems
- Fast convergence of the primal-dual dynamical system and corresponding algorithms for a nonsmooth bilinearly coupled saddle point problem