Faster first-order primal-dual methods for linear programming using restarts and sharpness
From MaRDI portal
Publication:6165583
Abstract: First-order primal-dual methods are appealing for their low memory overhead, fast iterations, and effective parallelization. However, they are often slow at finding high accuracy solutions, which creates a barrier to their use in traditional linear programming (LP) applications. This paper exploits the sharpness of primal-dual formulations of LP instances to achieve linear convergence using restarts in a general setting that applies to ADMM (alternating direction method of multipliers), PDHG (primal-dual hybrid gradient method) and EGM (extragradient method). In the special case of PDHG, without restarts we show an iteration count lower bound of \(\Omega(\kappa^2 \log(1/\epsilon))\), while with restarts we show an iteration count upper bound of \(\mathcal{O}(\kappa \log(1/\epsilon))\), where \(\kappa\) is a condition number and \(\epsilon\) is the desired accuracy. Moreover, the upper bound is optimal for a wide class of primal-dual methods, and applies to the strictly more general class of sharp primal-dual problems. We develop an adaptive restart scheme and verify that restarts significantly improve the ability of PDHG, EGM, and ADMM to find high accuracy solutions to LP problems.
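The restart idea summarized in the abstract can be illustrated with a minimal sketch: run PDHG on the primal-dual saddle-point form of an LP for an epoch, average the iterates, and restart from the average. The sketch below uses a fixed epoch length for simplicity; this is an assumption for illustration (the paper's adaptive scheme chooses restart times from a normalized duality gap), and the function name and toy problem are hypothetical, not from the source.

```python
import numpy as np

def restarted_pdhg(c, A, b, n_restarts=20, epoch_len=100):
    """Sketch of restarted PDHG for the LP  min c^T x  s.t.  A x >= b, x >= 0,
    via the saddle point  min_{x>=0} max_{y>=0}  c^T x + y^T (b - A x).
    Fixed epoch length is a simplification of the paper's adaptive restarts."""
    m, n = A.shape
    # step sizes tau * sigma * ||A||^2 < 1 guarantee PDHG convergence
    tau = sigma = 0.9 / np.linalg.norm(A, 2)
    x, y = np.zeros(n), np.zeros(m)
    for _ in range(n_restarts):
        x_sum, y_sum = np.zeros(n), np.zeros(m)
        for _ in range(epoch_len):
            # primal step: projected gradient step on x >= 0
            x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)
            # dual step with extrapolation (2*x_new - x), projected onto y >= 0
            y = np.maximum(y + sigma * (b - A @ (2 * x_new - x)), 0.0)
            x = x_new
            x_sum += x
            y_sum += y
        # restart: re-center the next epoch at the epoch's averaged iterate
        x, y = x_sum / epoch_len, y_sum / epoch_len
    return x, y

# toy LP (hypothetical example): min x1 + x2  s.t.  x1 + x2 >= 1, x >= 0
c = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, y = restarted_pdhg(c, A, b)
print(c @ x)  # approaches the optimal value 1
```

Restarting from the average is what converts PDHG's sublinear ergodic rate into linear convergence on sharp problems: each epoch's average lands measurably closer to the optimum, so repeated restarts contract the distance geometrically.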
Recommendations
- A first-order primal-dual algorithm with linesearch
- Primal-dual first-order methods for a class of cone programming
- Unified linear convergence of first-order primal-dual algorithms for saddle point problems
- Primal-dual first-order methods with \(\mathcal{O}(1/\varepsilon)\) iteration-complexity for cone programming
Cites work
- scientific article; zbMATH DE number 3850830
- scientific article; zbMATH DE number 4164577
- scientific article; zbMATH DE number 3534286
- scientific article; zbMATH DE number 1240224
- scientific article; zbMATH DE number 977420
- scientific article; zbMATH DE number 1489799
- scientific article; zbMATH DE number 1795730
- A Gauss-Newton method for convex composite optimization
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A new polynomial-time algorithm for linear programming
- Adaptive restart for accelerated gradient schemes
- Adaptive restart of accelerated gradient methods under local quadratic growth condition
- An ADMM-based interior-point method for large-scale linear programming
- An Asymptotically Superlinearly Convergent Semismooth Newton Augmented Lagrangian Method for Linear Programming
- Applied analysis
- Approximations to Solutions to Systems of Linear Inequalities
- Conic optimization via operator splitting and homogeneous self-dual embedding
- Convergence rate of \(\mathcal{O}(1/k)\) for optimistic gradient and extragradient methods in smooth convex-concave saddle point problems
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Error bounds for solutions of linear equations and inequalities
- Finite termination of the proximal point algorithm
- Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications
- First-order algorithm with \({\mathcal{O}(\ln(1/\epsilon))}\) convergence for \({\epsilon}\)-equilibrium in two-person zero-sum games
- Gradient methods for minimizing composite functions
- Incorporating Condition Measures into the Complexity Theory of Linear Programming
- Interior point methods 25 years later
- Introductory lectures on convex optimization. A basic course.
- Linear programming and sequential decisions
- Linear programming, complexity theory and elementary functional analysis
- Local linear convergence analysis of primal-dual splitting methods
- Monotone Operators and the Proximal Point Algorithm
- New characterizations of Hoffman constants for systems of linear constraints
- New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure
- On linear convergence of iterative methods for the variational inequality problem
- On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- On the Numerical Solution of Heat Conduction Problems in Two and Three Space Variables
- On the \(O(1/n)\) convergence rate of the Douglas-Rachford alternating direction method
- On the convergence of stochastic primal-dual hybrid gradient
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Partial smoothness and constant rank
- Production scheduling by the transportation method of linear programming
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Restarting algorithms: sometimes there is free lunch
- Sharpness, restart, and acceleration
- Smooth minimization of non-smooth functions
- Subgradient methods for huge-scale optimization problems
- Subgradient methods for sharp weakly convex functions
- The `Idiot' crash quadratic penalty algorithm for linear programming and its application to linearizations of quadratic assignment problems
- The condition number of a function relative to a set
- The stepping stone method of explaining linear programming calculations in transportation problems
- Time bounds for selection
- Weak Sharp Minima in Mathematical Programming
- Weak Sharp Solutions of Variational Inequalities
Cited in (4)
- Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient
- An easily computable upper bound on the Hoffman constant for homogeneous inequality systems
- WARPd: a linearly convergent first-order primal-dual algorithm for inverse problems with approximate sharpness conditions
- Non-stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates