On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization
Publication: 4286937
DOI: 10.1287/moor.18.4.846
zbMath: 0804.90103
OpenAlex: W2153745131
MaRDI QID: Q4286937
Publication date: 19 January 1995
Published in: Mathematics of Operations Research
Full work available at URL: https://doi.org/10.1287/moor.18.4.846
Mathematics Subject Classification: Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Deterministic network models in operations research (90B10)
Related Items (28)
Further properties of the forward-backward envelope with applications to difference-of-convex programming
Error bounds for inconsistent linear inequalities and programs
Error estimates and Lipschitz constants for best approximation in continuous function spaces
Error bounds in mathematical programming
Active-Set Identification with Complexity Guarantees of an Almost Cyclic 2-Coordinate Descent Method with Armijo Line Search
A First-Order Optimization Algorithm for Statistical Learning with Hierarchical Sparsity Structure
A unified approach to error bounds for structured convex optimization problems
Approximation accuracy, gradient methods, and error bound for structured convex optimization
Subgradient methods for huge-scale optimization problems
On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems
A modified self-adaptive dual ascent method with relaxed stepsize condition for linearly constrained quadratic convex optimization
An efficient implementable inexact entropic proximal point algorithm for a class of linear programming problems
Convergence of the augmented decomposition algorithm
On the linear convergence of the alternating direction method of multipliers
A coordinate gradient descent method for nonsmooth separable minimization
Iteration complexity analysis of block coordinate descent methods
Convergent Lagrangian heuristics for nonlinear minimum cost network flows
Nonconvex proximal incremental aggregated gradient method with linear convergence
Iteration complexity analysis of dual first-order methods for conic convex programming
Block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization
Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
Projection onto a Polyhedron that Exploits Sparsity
A Block Successive Upper-Bound Minimization Method of Multipliers for Linearly Constrained Convex Optimization
Accelerated iterative hard thresholding algorithm for \(l_0\) regularized regression problem
A sequential updating scheme of the Lagrange multiplier for separable convex programming
Error bounds and convergence analysis of feasible descent methods: A general approach
A Global Dual Error Bound and Its Application to the Analysis of Linearly Constrained Nonconvex Optimization