Faster first-order primal-dual methods for linear programming using restarts and sharpness
From MaRDI portal
Publication: 6165583
DOI: 10.1007/S10107-022-01901-9
zbMATH Open: 1522.90024
arXiv: 2105.12715
OpenAlex: W3165200223
MaRDI QID: Q6165583
FDO: Q6165583
Authors:
Publication date: 1 August 2023
Published in: Mathematical Programming. Series A. Series B
Abstract: First-order primal-dual methods are appealing for their low memory overhead, fast iterations, and effective parallelization. However, they are often slow at finding high accuracy solutions, which creates a barrier to their use in traditional linear programming (LP) applications. This paper exploits the sharpness of primal-dual formulations of LP instances to achieve linear convergence using restarts in a general setting that applies to ADMM (alternating direction method of multipliers), PDHG (primal-dual hybrid gradient method) and EGM (extragradient method). In the special case of PDHG, without restarts we show an iteration count lower bound of \(\Omega(\kappa^2 \log(1/\epsilon))\), while with restarts we show an iteration count upper bound of \({\mathcal{O}(\kappa \log(1/\epsilon))}\), where \(\kappa\) is a condition number and \(\epsilon\) is the desired accuracy. Moreover, the upper bound is optimal for a wide class of primal-dual methods, and applies to the strictly more general class of sharp primal-dual problems. We develop an adaptive restart scheme and verify that restarts significantly improve the ability of PDHG, EGM, and ADMM to find high accuracy solutions to LP problems.
Full work available at URL: https://arxiv.org/abs/2105.12715
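The restart idea described in the abstract can be illustrated with a minimal sketch: run averaged PDHG on the LP saddle point \(\min_{x \ge 0} \max_y \; c^\top x + y^\top(b - Ax)\) and periodically restart from the running average iterate. This is a fixed-frequency restart on a toy problem, not the paper's adaptive scheme; the function name, step-size choices, and restart length below are illustrative assumptions.

```python
import numpy as np

def pdhg_restarted(c, A, b, n_restarts=20, inner_iters=100):
    """Sketch of restarted averaged PDHG for  min c^T x  s.t.  Ax = b, x >= 0.

    Each outer round runs `inner_iters` PDHG steps, then restarts from the
    running average iterate (a fixed-frequency restart; the paper's adaptive
    scheme chooses the restart point from observed progress instead).
    """
    m, n = A.shape
    # Standard PDHG step-size condition: tau * sigma * ||A||^2 < 1.
    op_norm = np.linalg.norm(A, 2)
    tau = sigma = 0.9 / op_norm
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(n_restarts):
        x_avg, y_avg = np.zeros(n), np.zeros(m)
        for _ in range(inner_iters):
            # Primal step: gradient of the Lagrangian in x, projected onto x >= 0.
            x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)
            # Dual step with the usual PDHG extrapolation 2*x_new - x.
            y = y + sigma * (b - A @ (2 * x_new - x))
            x = x_new
            x_avg += x / inner_iters
            y_avg += y / inner_iters
        x, y = x_avg, y_avg  # restart from the averaged iterate
    return x, y

# Toy instance: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0;
# optimum x* = (1, 0), dual y* = 1.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x, y = pdhg_restarted(c, A, b)
```

On sharp instances such as LPs, restarting from the average is what upgrades the \(\mathcal{O}(1/k)\) rate of averaged PDHG to linear convergence.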
Recommendations
- A first-order primal-dual algorithm with linesearch
- Primal-dual first-order methods for a class of cone programming
- Unified linear convergence of first-order primal-dual algorithms for saddle point problems
- Primal-dual first-order methods with \({\mathcal {O}(1/\varepsilon)}\) iteration-complexity for cone programming
Cites Work
- An ADMM-based interior-point method for large-scale linear programming
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Conic optimization via operator splitting and homogeneous self-dual embedding
- On the Numerical Solution of Heat Conduction Problems in Two and Three Space Variables
- A new polynomial-time algorithm for linear programming
- On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- Smooth minimization of non-smooth functions
- Introductory lectures on convex optimization. A basic course.
- Adaptive restart for accelerated gradient schemes
- On the \(O(1/n)\) convergence rate of the Douglas-Rachford alternating direction method
- Gradient methods for minimizing composite functions
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Monotone Operators and the Proximal Point Algorithm
- Interior point methods 25 years later
- Applied analysis
- Subgradient methods for huge-scale optimization problems
- Approximations to Solutions to Systems of Linear Inequalities
- Error bounds for solutions of linear equations and inequalities
- Incorporating Condition Measures into the Complexity Theory of Linear Programming
- Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications
- Finite termination of the proximal point algorithm
- On linear convergence of iterative methods for the variational inequality problem
- Weak Sharp Minima in Mathematical Programming
- Weak Sharp Solutions of Variational Inequalities
- A Gauss-Newton method for convex composite optimization
- Error bounds and convergence analysis of feasible descent methods: A general approach
- The stepping stone method of explaining linear programming calculations in transportation problems
- Time bounds for selection
- First-order algorithm with \({\mathcal{O}(\ln(1/\epsilon))}\) convergence for \({\epsilon}\)-equilibrium in two-person zero-sum games
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Linear programming and sequential decisions
- Production scheduling by the transportation method of linear programming
- Linear programming, complexity theory and elementary functional analysis
- New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure
- Subgradient methods for sharp weakly convex functions
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Local linear convergence analysis of primal-dual splitting methods
- Partial smoothness and constant rank
- New characterizations of Hoffman constants for systems of linear constraints
- Adaptive restart of accelerated gradient methods under local quadratic growth condition
- Convergence rate of \(\mathcal{O}(1/k)\) for optimistic gradient and extragradient methods in smooth convex-concave saddle point problems
- Sharpness, restart, and acceleration
- The condition number of a function relative to a set
- Restarting algorithms: sometimes there is free lunch
- An Asymptotically Superlinearly Convergent Semismooth Newton Augmented Lagrangian Method for Linear Programming
- On the convergence of stochastic primal-dual hybrid gradient
- The `Idiot' crash quadratic penalty algorithm for linear programming and its application to linearizations of quadratic assignment problems
- Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient
Cited In (4)
- Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient
- An easily computable upper bound on the Hoffman constant for homogeneous inequality systems
- WARPd: a linearly convergent first-order primal-dual algorithm for inverse problems with approximate sharpness conditions
- Non-stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates