Faster first-order primal-dual methods for linear programming using restarts and sharpness
From MaRDI portal
Publication: 6165583
DOI: 10.1007/s10107-022-01901-9
zbMath: 1522.90024
arXiv: 2105.12715
OpenAlex: W3165200223
MaRDI QID: Q6165583
No author found.
Publication date: 1 August 2023
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/2105.12715
Related Items (2)
- Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient
- An easily computable upper bound on the Hoffman constant for homogeneous inequality systems
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Smooth minimization of non-smooth functions
- Conic optimization via operator splitting and homogeneous self-dual embedding
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Gradient methods for minimizing composite functions
- Subgradient methods for huge-scale optimization problems
- First-order algorithm with $\mathcal{O}(\ln(1/\epsilon))$ convergence for $\epsilon$-equilibrium in two-person zero-sum games
- Interior point methods 25 years later
- A new polynomial-time algorithm for linear programming
- Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications
- Finite termination of the proximal point algorithm
- On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Time bounds for selection
- Introductory lectures on convex optimization. A basic course.
- Subgradient methods for sharp weakly convex functions
- New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure
- On linear convergence of iterative methods for the variational inequality problem
- A Gauss-Newton method for convex composite optimization
- Linear programming, complexity theory and elementary functional analysis
- A first-order primal-dual algorithm for convex problems with applications to imaging
- New characterizations of Hoffman constants for systems of linear constraints
- The condition number of a function relative to a set
- Restarting algorithms: sometimes there is free lunch
- Adaptive restart for accelerated gradient schemes
- Linear Programming and Sequential Decisions
- The Stepping Stone Method of Explaining Linear Programming Calculations in Transportation Problems
- On the $O(1/n)$ Convergence Rate of the Douglas–Rachford Alternating Direction Method
- Weak Sharp Minima in Mathematical Programming
- On the Numerical Solution of Heat Conduction Problems in Two and Three Space Variables
- Monotone Operators and the Proximal Point Algorithm
- Weak Sharp Solutions of Variational Inequalities
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Approximations to Solutions to Systems of Linear Inequalities
- Error bounds for solutions of linear equations and inequalities
- Incorporating Condition Measures into the Complexity Theory of Linear Programming
- An ADMM-based interior-point method for large-scale linear programming
- Partial Smoothness and Constant Rank
- On the Convergence of Stochastic Primal-Dual Hybrid Gradient
- The ‘Idiot’ crash quadratic penalty algorithm for linear programming and its application to linearizations of quadratic assignment problems
- An Asymptotically Superlinearly Convergent Semismooth Newton Augmented Lagrangian Method for Linear Programming
- Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems
- Sharpness, Restart, and Acceleration
- Adaptive restart of accelerated gradient methods under local quadratic growth condition
- Production Scheduling by the Transportation Method of Linear Programming
- Local linear convergence analysis of Primal–Dual splitting methods
- Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient