On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization

From MaRDI portal

Publication:4286937

DOI: 10.1287/moor.18.4.846
zbMath: 0804.90103
OpenAlex: W2153745131
MaRDI QID: Q4286937

Zhi-Quan Luo, Paul Tseng

Publication date: 19 January 1995

Published in: Mathematics of Operations Research

Full work available at URL: https://doi.org/10.1287/moor.18.4.846
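
The record carries no abstract, so purely for orientation, here is a minimal sketch of the generic dual ascent iteration named in the title: minimize a convex objective subject to linear constraints Ax = b by alternating a primal minimization of the Lagrangian with a gradient ascent step on the multipliers. The toy quadratic objective, the data A and b, and the step size alpha are assumptions for illustration, not the paper's method or rate analysis.

```python
import numpy as np

# Illustrative toy problem (not from the paper):
#   minimize ½‖x‖²  subject to  Ax = b.
# The Lagrangian is L(x, lam) = ½‖x‖² + lam·(Ax − b), so the primal
# minimizer has the closed form x = −Aᵀ·lam.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # assumed constraint matrix
b = rng.standard_normal(3)        # assumed right-hand side

lam = np.zeros(3)                           # dual variables (multipliers)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2     # step size 1/σ_max(A)², a safe choice
for k in range(2000):
    x = -A.T @ lam                 # primal step: argmin_x L(x, lam)
    lam += alpha * (A @ x - b)     # dual ascent step on the constraint residual

# The residual Ax − b is the gradient of the dual function; it tends to zero
# as the dual iterates converge.
print("constraint residual:", np.linalg.norm(A @ x - b))
```

The paper's contribution is the convergence-rate analysis of such dual ascent schemes for linearly constrained convex minimization; this sketch only shows the iteration's shape.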




Related Items (28)

Further properties of the forward-backward envelope with applications to difference-of-convex programming
Error bounds for inconsistent linear inequalities and programs
Error estimates and Lipschitz constants for best approximation in continuous function spaces
Error bounds in mathematical programming
Active-Set Identification with Complexity Guarantees of an Almost Cyclic 2-Coordinate Descent Method with Armijo Line Search
A First-Order Optimization Algorithm for Statistical Learning with Hierarchical Sparsity Structure
A unified approach to error bounds for structured convex optimization problems
Approximation accuracy, gradient methods, and error bound for structured convex optimization
Subgradient methods for huge-scale optimization problems
On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems
A modified self-adaptive dual ascent method with relaxed stepsize condition for linearly constrained quadratic convex optimization
An efficient implementable inexact entropic proximal point algorithm for a class of linear programming problems
Convergence of the augmented decomposition algorithm
On the linear convergence of the alternating direction method of multipliers
A coordinate gradient descent method for nonsmooth separable minimization
Iteration complexity analysis of block coordinate descent methods
Convergent Lagrangian heuristics for nonlinear minimum cost network flows
Nonconvex proximal incremental aggregated gradient method with linear convergence
Iteration complexity analysis of dual first-order methods for conic convex programming
Block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization
Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
Projection onto a Polyhedron that Exploits Sparsity
A Block Successive Upper-Bound Minimization Method of Multipliers for Linearly Constrained Convex Optimization
Accelerated iterative hard thresholding algorithm for \(l_0\) regularized regression problem
A sequential updating scheme of the Lagrange multiplier for separable convex programming
Error bounds and convergence analysis of feasible descent methods: A general approach
A Global Dual Error Bound and Its Application to the Analysis of Linearly Constrained Nonconvex Optimization




This page was built for publication: On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization