On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization
From MaRDI portal
Publication:4286937
DOI: 10.1287/moor.18.4.846
zbMATH Open: 0804.90103
OpenAlex: W2153745131
MaRDI QID: Q4286937
Authors: Zhi-Quan Luo, Paul Tseng
Publication date: 19 January 1995
Published in: Mathematics of Operations Research
Full work available at URL: https://doi.org/10.1287/moor.18.4.846
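To illustrate the subject of the paper, here is a minimal sketch (not taken from the paper itself) of a dual gradient-ascent method for a linearly constrained, strictly convex quadratic program; all names and parameter values are illustrative assumptions:

```python
# Illustrative sketch (not from the paper): dual gradient ascent for
#   minimize (1/2) x^T Q x + c^T x   subject to   A x = b,
# with Q symmetric positive definite, so the inner minimization over x
# has a closed form and the dual function is differentiable.
import numpy as np

def dual_ascent(Q, c, A, b, step=0.1, iters=2000):
    """Return (x, lam) after `iters` dual gradient-ascent steps."""
    lam = np.zeros(A.shape[0])
    Q_inv = np.linalg.inv(Q)
    for _ in range(iters):
        # Minimize the Lagrangian over x for the current multiplier:
        #   grad_x L = Q x + c + A^T lam = 0  =>  x = -Q^{-1}(c + A^T lam)
        x = -Q_inv @ (c + A.T @ lam)
        # Ascend on the concave dual; its gradient is the constraint residual.
        lam = lam + step * (A @ x - b)
    return x, lam

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 5, 2
    M = rng.standard_normal((n, n))
    Q = M @ M.T + n * np.eye(n)      # shifted to be positive definite
    c = rng.standard_normal(n)
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)
    x, lam = dual_ascent(Q, c, A, b)
    print(np.linalg.norm(A @ x - b))  # constraint residual; small at convergence
```

Under strong convexity of the primal cost, the constraint residual (the dual gradient) shrinks at a linear rate for a sufficiently small step size, which is the kind of convergence-rate behavior the paper analyzes.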
Recommendations
- Convergence rates of a dual gradient method for constrained linear ill-posed problems
- Dual Ascent Methods for Problems with Strictly Convex Costs and Linear Constraints: A Unified Approach
- On the rate of convergence of the proximal alternating linearized minimization algorithm for convex problems
- On Dual Convergence and the Rate of Primal Convergence of Bregman’s Convex Programming Method
- Linear Convergence of Random Dual Coordinate Descent on Nonpolyhedral Convex Problems
- On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
- Linear convergence rate of the generalized alternating direction method of multipliers for a class of convex minimization problems
- On the Convergence Time of Dual Subgradient Methods for Strongly Convex Programs
- On convergence of the dual Newton method for a linear semidefinite programming problem
MSC classification: Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Deterministic network models in operations research (90B10)
Cited In (33)
- Active-set identification with complexity guarantees of an almost cyclic 2-coordinate descent method with Armijo line search
- A global dual error bound and its application to the analysis of linearly constrained nonconvex optimization
- On the Convergence Time of Dual Subgradient Methods for Strongly Convex Programs
- Error estimates and Lipschitz constants for best approximation in continuous function spaces
- Subgradient methods for huge-scale optimization problems
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Convergence of the augmented decomposition algorithm
- Graph-structured tensor optimization for nonlinear density control and mean field games
- Block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization
- Projection onto a polyhedron that exploits sparsity
- Error bounds, quadratic growth, and linear convergence of proximal methods
- Error bounds in mathematical programming
- Linear convergence of proximal gradient algorithm with extrapolation for a class of nonconvex nonsmooth minimization problems
- On convex optimization with linear constraints
- Further properties of the forward-backward envelope with applications to difference-of-convex programming
- Error bounds for inconsistent linear inequalities and programs
- Iteration complexity analysis of dual first-order methods for conic convex programming
- On the linear convergence of the alternating direction method of multipliers
- A First-Order Optimization Algorithm for Statistical Learning with Hierarchical Sparsity Structure
- Rate of convergence analysis of dual-based variables decomposition methods for strongly convex problems
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Nonconvex proximal incremental aggregated gradient method with linear convergence
- A unified approach to error bounds for structured convex optimization problems
- On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
- A block successive upper-bound minimization method of multipliers for linearly constrained convex optimization
- Iteration complexity analysis of block coordinate descent methods
- Accelerated iterative hard thresholding algorithm for \(l_0\) regularized regression problem
- A modified self-adaptive dual ascent method with relaxed stepsize condition for linearly constrained quadratic convex optimization
- A sequential updating scheme of the Lagrange multiplier for separable convex programming
- Dual coordinate ascent methods for non-strictly convex minimization
- Convergent Lagrangian heuristics for nonlinear minimum cost network flows
- A coordinate gradient descent method for nonsmooth separable minimization
- An efficient implementable inexact entropic proximal point algorithm for a class of linear programming problems