A smooth primal-dual optimization framework for nonsmooth composite convex minimization
Abstract: We propose a new first-order primal-dual optimization framework for a convex optimization template with broad applications. Our algorithms feature optimal convergence guarantees under a variety of common structural assumptions on the problem template. The analysis relies on a novel combination of three classic ideas applied to the primal-dual gap function: smoothing, acceleration, and homotopy. The resulting algorithms achieve the best known convergence rates, in particular when the template consists only of nonsmooth functions. We also outline a restart strategy for the acceleration that significantly enhances practical performance. We demonstrate relations with the augmented Lagrangian method and show how to exploit strongly convex objectives with rigorous convergence rate guarantees. We provide numerical evidence on two examples and illustrate that the new methods can outperform the state-of-the-art, including the Chambolle-Pock algorithm and the alternating direction method of multipliers (ADMM).
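For intuition on how the three ingredients interact, the following Python sketch applies them to the toy fully nonsmooth instance min_x ||Ax - b||_1. This is our illustration only, assuming a Huber-type (Nesterov) smoothing of the l1-norm and a crude halving schedule for the smoothing parameter; the paper's actual method works on the primal-dual gap function with parameter updates chosen to deliver its optimal rates.

```python
import numpy as np

def smoothed_l1_grad(r, beta):
    # Gradient of the Huber (Nesterov) smoothing of ||r||_1:
    # h_beta(r_i) = r_i^2 / (2*beta) if |r_i| <= beta, else |r_i| - beta/2.
    return np.clip(r / beta, -1.0, 1.0)

def smooth_accel_homotopy(A, b, n_stages=8, iters_per_stage=200, beta0=1.0):
    """Smoothing + Nesterov acceleration + homotopy on min_x ||Ax - b||_1."""
    n = A.shape[1]
    x = np.zeros(n)
    lip_A = np.linalg.norm(A, 2) ** 2        # ||A||^2 (spectral norm squared)
    beta = beta0
    for _ in range(n_stages):                # homotopy: shrink the smoothing parameter
        L = lip_A / beta                     # Lipschitz constant of the smoothed gradient
        y, t = x.copy(), 1.0
        for _ in range(iters_per_stage):     # accelerated gradient on the smoothed surrogate
            grad = A.T @ smoothed_l1_grad(A @ y - b, beta)
            x_next = y - grad / L
            t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # momentum extrapolation
            x, t = x_next, t_next
        beta *= 0.5                          # simple halving schedule (our assumption)
    return x

# Toy usage: b lies in the range of A, so the optimal l1 residual is 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = A @ rng.standard_normal(20)
x_hat = smooth_accel_homotopy(A, b)
print("final l1 residual:", np.abs(A @ x_hat - b).sum())
```

Halving beta only between stages is the bluntest possible homotopy; as the abstract indicates, the paper instead couples the smoothing parameter update to the acceleration scheme itself, which is what produces its optimal rate guarantees on the purely nonsmooth template.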
Recommendations
- An adaptive primal-dual framework for nonsmooth convex minimization
- Adaptive smoothing algorithms for nonsmooth composite convex minimization
- Smoothing and first order methods: a unified framework
- Dualize, split, randomize: toward fast nonsmooth optimization algorithms
- Excessive Gap Technique in Nonsmooth Convex Minimization
Cites work
- scientific article (zbMATH DE number 3914081)
- scientific article (zbMATH DE number 51132)
- scientific article (zbMATH DE number 3511879)
- scientific article (zbMATH DE number 2107836)
- scientific article (zbMATH DE number 5060482)
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A Variable Metric Extension of the Forward–Backward–Forward Algorithm for Monotone Operators
- A fast dual proximal gradient algorithm for convex minimization and applications
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A first-order primal-dual algorithm with linesearch
- A hybrid approximate extragradient-proximal point algorithm using the enlargement of a maximal monotone operator
- A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
- A proximal-based decomposition method for convex minimization problems
- A variable smoothing algorithm for solving convex optimization problems
- Adaptive restart for accelerated gradient schemes
- An accelerated HPE-type algorithm for a class of composite convex-concave saddle-point problems
- An accelerated linearized alternating direction method of multipliers
- Application of a Smoothing Technique to Decomposition in Convex Optimization
- Applications of a Splitting Algorithm to Decomposition in Convex Programming and Variational Inequalities
- Complexity of Variants of Tseng's Modified F-B Splitting and Korpelevich's Methods for Hemivariational Inequalities with Applications to Saddle-point and Convex Optimization Problems
- Composite self-concordant minimization
- Computational complexity of inexact gradient augmented Lagrangian methods: application to constrained MPC
- Convergence Rate Analysis of Primal-Dual Splitting Schemes
- Convergence analysis of primal-dual algorithms for a saddle-point problem: from contraction perspective
- Convergence rate analysis of the forward-Douglas-Rachford splitting scheme
- Convex Analysis
- Convex analysis and monotone operator theory in Hilbert spaces
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Dual extrapolation and its applications to solving variational inequalities and related problems
- Excessive Gap Technique in Nonsmooth Convex Minimization
- Faster convergence rates of relaxed Peaceman-Rachford and ADMM under regularity assumptions
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Introductory lectures on convex optimization. A basic course.
- Iteration complexity analysis of dual first-order methods for conic convex programming
- Iteration-complexity of block-decomposition algorithms and the alternating direction method of multipliers
- Iteration-complexity of first-order augmented Lagrangian methods for convex programming
- Iteration-complexity of first-order penalty methods for convex programming
- Lectures on modern convex optimization. Analysis, algorithms, and engineering applications
- On non-ergodic convergence rate of Douglas-Rachford alternating direction method of multipliers
- On the Global Linear Convergence of the ADMM with Multiblock Variables
- On the \(O(1/n)\) convergence rate of the Douglas-Rachford alternating direction method
- On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean
- On the global and linear convergence of the generalized alternating direction method of multipliers
- On the sublinear convergence rate of multi-block ADMM
- Optimal primal-dual methods for a class of saddle point problems
- Parallel multi-block ADMM with \(o(1/k)\) convergence
- Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Proximal splitting methods in signal processing
- Rate of Convergence Analysis of Decomposition Methods Based on the Proximal Method of Multipliers for Convex Minimization
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- Smooth minimization of non-smooth functions
- Smoothing alternating direction methods for fully nonsmooth constrained convex optimization
- Smoothing and first order methods: a unified framework
- Solving Multiple-Block Separable Convex Minimization Problems Using Two-Block Alternating Direction Method of Multipliers
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- The convex geometry of linear inverse problems
- The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
- Variational Analysis
Cited in (29 documents)
- A primal-dual smoothing framework for max-structured non-convex optimization
- A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates
- New primal-dual algorithms for a class of nonsmooth and nonlinear convex-concave minimax problems
- Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs
- A generic coordinate descent solver for non-smooth convex optimisation
- Accelerated primal-dual methods with adaptive parameters for composite convex optimization with linear constraints
- First-order methods for convex optimization
- A primal-dual flow for affine constrained convex optimization
- A coordinate-descent primal-dual algorithm with large step size and possibly nonseparable functions
- Variable smoothing for convex optimization problems using stochastic gradients
- A unified convergence rate analysis of the accelerated smoothed gap reduction algorithm
- Fast augmented Lagrangian method in the convex regime with convergence guarantees for the iterates
- Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient
- A new homotopy proximal variable-metric framework for composite convex minimization
- Proximal alternating penalty algorithms for nonsmooth constrained convex optimization
- Accelerated dual-averaging primal–dual method for composite convex minimization
- An adaptive primal-dual framework for nonsmooth convex minimization
- Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method
- First-order primal-dual methods for nonsmooth non-convex optimization
- Variable smoothing for weakly convex composite functions
- The operator splitting schemes revisited: primal-dual gap and degeneracy reduction by a unified analysis
- Random minibatch subgradient algorithms for convex problems with functional constraints
- On the convergence of stochastic primal-dual hybrid gradient
- An efficient primal dual prox method for non-smooth optimization
- An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
- A dual approach for optimal algorithms in distributed optimization over networks
- Non-ergodic convergence rate of an inertial accelerated primal-dual algorithm for saddle point problems
- Near-optimal tensor methods for minimizing the gradient norm of convex functions and accelerated primal–dual tensor methods
- Non-stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates