An adaptive primal-dual framework for nonsmooth convex minimization
From MaRDI portal
Publication:2220901
Abstract: We propose a new self-adaptive, double-loop smoothing algorithm to solve composite, nonsmooth, and constrained convex optimization problems. Our algorithm is based on Nesterov's smoothing technique via general Bregman distance functions. It self-adaptively selects the number of inner-loop iterations to achieve a desired complexity bound without requiring the target accuracy a priori, as variants of the augmented Lagrangian method (ALM) do. We prove an \(\mathcal{O}(1/k)\) convergence rate on the last iterate of the outer sequence for both unconstrained and constrained settings, in contrast to the ergodic rates that are common in the ALM and alternating direction method-of-multipliers literature. Compared to existing inexact ALM or quadratic penalty methods, our analysis does not rely on worst-case bounds for the subproblem solved by the inner loop. Therefore, our algorithm can be viewed either as a restarting technique applied to the ASGARD method of [TranDinh2015b] but with rigorous theoretical guarantees, or as an inexact ALM with explicit inner-loop termination rules and adaptive parameters. Our algorithm requires the parameters to be initialized only once and updates them automatically during the iteration process without tuning. We illustrate the superiority of our methods on several examples compared to the state-of-the-art.
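The double-loop smoothing idea in the abstract can be illustrated with a minimal, hypothetical sketch. This is not the paper's ASGARD-with-restart method: it uses the Euclidean Bregman distance only, plain gradient descent as the inner solver, and a fixed halving/doubling schedule in place of the paper's self-adaptive rules. The nonsmooth \(\ell_1\) term is replaced by its Moreau (Huber) envelope with smoothness parameter \(\beta\); each outer round runs an inner loop at fixed \(\beta\), then tightens the smoothing and enlarges the inner iteration budget to compensate for the larger Lipschitz constant.

```python
import numpy as np

def huber_grad(x, beta):
    """Gradient of the Moreau envelope of |.| with parameter beta (elementwise)."""
    return np.clip(x / beta, -1.0, 1.0)

def double_loop_smoothing(C, d, beta0=1.0, rounds=8, inner0=50):
    """Sketch: minimize 0.5*||C x - d||^2 + ||x||_1 via smoothing.

    Outer loop: halve the smoothness parameter beta each round.
    Inner loop: gradient descent on the smoothed objective with step 1/L,
    where L = ||C||^2 + 1/beta bounds the smoothed gradient's Lipschitz constant.
    The inner budget doubles each round, a crude stand-in for the paper's
    adaptive inner-loop termination rule.
    """
    x = np.zeros(C.shape[1])
    L0 = np.linalg.norm(C, 2) ** 2        # spectral norm squared of C
    beta, inner = beta0, inner0
    for _ in range(rounds):
        L = L0 + 1.0 / beta               # Lipschitz constant at this smoothing level
        for _ in range(inner):
            g = C.T @ (C @ x - d) + huber_grad(x, beta)
            x -= g / L
        beta *= 0.5                       # tighter smoothing next round
        inner *= 2                        # larger inner budget to compensate
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C = rng.standard_normal((30, 10))
    d = rng.standard_normal(30)
    x = double_loop_smoothing(C, d)
    obj = 0.5 * np.linalg.norm(C @ x - d) ** 2 + np.abs(x).sum()
    print(f"objective at final iterate: {obj:.4f}")
```

The schedule (beta halved, inner budget doubled) keeps total work per outer round roughly proportional to the tighter accuracy demanded; the paper's contribution is choosing these quantities adaptively, with last-iterate guarantees, rather than by a fixed rule as here.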
Recommendations
- Adaptive smoothing algorithms for nonsmooth composite convex minimization
- A smooth primal-dual optimization framework for nonsmooth composite convex minimization
- New primal-dual algorithms for a class of nonsmooth and nonlinear convex-concave minimax problems
- Adaptive inexact fast augmented Lagrangian methods for constrained convex optimization
- Smoothing alternating direction methods for fully nonsmooth constrained convex optimization
Cites work
- scientific article; zbMATH DE number 3790208
- scientific article; zbMATH DE number 3574917
- scientific article; zbMATH DE number 6765491
- scientific article; zbMATH DE number 2243362
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A double smoothing technique for solving unconstrained nondifferentiable convex optimization problems
- A dual algorithm for the solution of nonlinear variational problems via finite element approximation
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A note on the convergence of ADMM for linearly constrained convex optimization problems
- A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
- A smooth primal-dual optimization framework for nonsmooth composite convex minimization
- A splitting algorithm for dual monotone inclusions involving cocoercive operators
- A variable smoothing algorithm for solving convex optimization problems
- Accelerated alternating direction method of multipliers: an optimal \(O(1 / K)\) nonergodic analysis
- Accelerated first-order primal-dual proximal methods for linearly constrained composite convex programming
- Adaptive restart for accelerated gradient schemes
- An accelerated linearized alternating direction method of multipliers
- Application of a Smoothing Technique to Decomposition in Convex Optimization
- Atomic decomposition by basis pursuit
- Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
- Complexity of first-order inexact Lagrangian and penalty methods for conic convex programming
- Compressed sensing
- Computational complexity of inexact gradient augmented Lagrangian methods: application to constrained MPC
- Convergence Analysis of a Proximal-Like Minimization Algorithm Using Bregman Functions
- Convergence rate analysis of the forward-Douglas-Rachford splitting scheme
- Convex analysis and monotone operator theory in Hilbert spaces
- Convex functions. Constructions, characterizations and counterexamples
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Double smoothing technique for large-scale linearly constrained convex optimization
- Excessive Gap Technique in Nonsmooth Convex Minimization
- Faster convergence rates of relaxed Peaceman-Rachford and ADMM under regularity assumptions
- First-order algorithms for convex optimization with nonseparable objective and coupled constraints
- Iteration complexity of inexact augmented Lagrangian methods for constrained convex programming
- Iteration-complexity of block-decomposition algorithms and the alternating direction method of multipliers
- Iteration-complexity of first-order augmented Lagrangian methods for convex programming
- Iteration-complexity of first-order penalty methods for convex programming
- Multiplier and gradient methods
- Nonlinear Proximal Point Algorithms Using Bregman Functions, with Applications to Convex Programming
- On the \(O(1/n)\) convergence rate of the Douglas-Rachford alternating direction method
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Proximal Minimization Methods with Generalized Bregman Functions
- Proximal alternating penalty algorithms for nonsmooth constrained convex optimization
- Proximal splitting methods in signal processing
- Rate of Convergence Analysis of Decomposition Methods Based on the Proximal Method of Multipliers for Convex Minimization
- Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Smooth minimization of non-smooth functions
- Smoothing and first order methods: a unified framework
- Sparse and stable Markowitz portfolios
Cited in (23)
- A primal-dual smoothing framework for max-structured non-convex optimization
- Convergence properties of a randomized primal-dual algorithm with applications to parallel MRI
- A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates
- An inexact proximal path-following algorithm for constrained convex minimization
- New primal-dual algorithms for a class of nonsmooth and nonlinear convex-concave minimax problems
- A smooth primal-dual optimization framework for nonsmooth composite convex minimization
- Forward-reflected-backward method with variance reduction
- Adaptive Catalyst for Smooth Convex Optimization
- First-order methods for convex optimization
- Adaptive regularization minimization algorithms with nonsmooth norms
- A unified convergence rate analysis of the accelerated smoothed gap reduction algorithm
- Proximal alternating penalty algorithms for nonsmooth constrained convex optimization
- Adaptive FISTA for Nonconvex Optimization
- Cyclic Coordinate Dual Averaging with Extrapolation
- On the convergence of stochastic primal-dual hybrid gradient
- An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
- Non-ergodic convergence rate of an inertial accelerated primal-dual algorithm for saddle point problems
- Near-optimal tensor methods for minimizing the gradient norm of convex functions and accelerated primal–dual tensor methods
- Non-stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates
- Smoothing alternating direction methods for fully nonsmooth constrained convex optimization
- Optimal ecological transition path of a credit portfolio distribution, based on multidate Monge-Kantorovich formulation
- Adaptive smoothing algorithms for nonsmooth composite convex minimization
- An adaptive competitive penalty method for nonsmooth constrained optimization