A Coordinate-Descent Primal-Dual Algorithm with Large Step Size and Possibly Nonseparable Functions
From MaRDI portal
Publication:4646445
DOI: 10.1137/18M1168480
zbMath: 1411.90265
arXiv: 1508.04625
Wikidata: Q128616281 (Scholia: Q128616281)
MaRDI QID: Q4646445
Pascal Bianchi, Olivier Fercoq
Publication date: 14 January 2019
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1508.04625
Mathematics Subject Classification:
- Convex programming (90C25)
- Large-scale problems in mathematical programming (90C06)
- Discrete approximations in optimal control (49M25)
Related Items (16)
- Convergence properties of a randomized primal-dual algorithm with applications to parallel MRI
- Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems
- Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
- An accelerated coordinate gradient descent algorithm for non-separable composite optimization
- On the Convergence of Stochastic Primal-Dual Hybrid Gradient
- Cyclic Coordinate-Update Algorithms for Fixed-Point Problems: Analysis and Applications
- Cyclic Coordinate Dual Averaging with Extrapolation
- A new large-scale learning algorithm for generalized additive models
- Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient
- Acceleration of the PDHGM on partially strongly convex functions
- An alternative extrapolation scheme of PDHGM for saddle point problem with nonlinear function
- Primal-dual block-proximal splitting for a class of non-convex problems
- Multi-block Bregman proximal alternating linearized minimization and its application to orthogonal nonnegative matrix factorization
- A Block Successive Upper-Bound Minimization Method of Multipliers for Linearly Constrained Convex Optimization
- Distributed composite optimization for multi-agent systems with asynchrony
- A generic coordinate descent solver for non-smooth convex optimisation
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Parallel coordinate descent methods for big data optimization
- On the ergodic convergence rates of a first-order primal-dual algorithm
- A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
- Subgradient methods for huge-scale optimization problems
- A random coordinate descent algorithm for optimization problems with composite objective function and linear coupled constraints
- On the convergence of the iterates of the "fast iterative shrinkage/thresholding algorithm"
- Iteration complexity analysis of block coordinate descent methods
- Incremental proximal methods for large scale convex optimization
- A three-operator splitting scheme and its optimization applications
- A coordinate gradient descent method for linearly constrained smooth optimization and support vector machines training
- A coordinate gradient descent method for nonsmooth separable minimization
- A dual algorithm for the solution of nonlinear variational problems via finite element approximation
- On the convergence of the coordinate descent method for convex differentiable minimization
- First-order algorithms for convex optimization with nonseparable objective and coupled constraints
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A splitting algorithm for dual monotone inclusions involving cocoercive operators
- Randomized primal-dual proximal block coordinate updates
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Pathwise coordinate optimization
- A Class of Randomized Primal-Dual Algorithms for Distributed Optimization
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Convergence Analysis of Primal-Dual Algorithms for a Saddle-Point Problem: From Contraction Perspective
- A Coordinate Descent Primal-Dual Algorithm and Application to Distributed Asynchronous Optimization
- Accelerated, Parallel, and Proximal Coordinate Descent
- Algorithm 778: L-BFGS-B
- Bregman Monotone Optimization Algorithms
- A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
- Efficient Serial and Parallel Coordinate Descent Methods for Huge-Scale Truss Topology Design
- An Inexact Proximal Path-Following Algorithm for Constrained Convex Minimization
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- Minimizing Certain Convex Functions
- Faster Convergence Rates of Relaxed Peaceman-Rachford and ADMM Under Regularity Assumptions
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- On the Convergence of Block Coordinate Descent Type Methods
- Stochastic Quasi-Fejér Block-Coordinate Fixed Point Iterations with Random Sweeping
- Convergence of a block coordinate descent method for nondifferentiable minimization