Primal-dual accelerated gradient methods with small-dimensional relaxation oracle
Publication: 5085262
DOI: 10.1080/10556788.2020.1731747
zbMATH Open: 1489.90124
arXiv: 1809.05895
OpenAlex: W3009707357
MaRDI QID: Q5085262
FDO: Q5085262
Authors: Sergey Guminov, Pavel Dvurechensky, Yuri Nesterov, Alexander V. Gasnikov
Publication date: 27 June 2022
Published in: Optimization Methods \& Software
Abstract: In this paper, a new variant of accelerated gradient descent is proposed. The proposed method does not require any information about the objective function, uses exact line search for the practical acceleration of convergence, converges according to the well-known lower bounds for both convex and non-convex objective functions, possesses primal-dual properties and can be applied in the non-Euclidean set-up. As far as we know, this is the first such method possessing all of the above properties at the same time. We also present a universal version of the method which is applicable to non-smooth problems. We demonstrate how in practice one can efficiently use the combination of line search and primal-duality by considering a convex optimization problem with a simple structure (for example, linearly constrained).
Full work available at URL: https://arxiv.org/abs/1809.05895
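For orientation, the following is a minimal sketch of an accelerated gradient scheme in which fixed step sizes are replaced by one-dimensional exact line searches, in the Euclidean setting. It is an illustrative simplification and not the exact algorithm from the paper; the function names, the weight update rule, and the quadratic test problem are assumptions, and SciPy's scalar minimizer is used for the one-dimensional searches.

```python
# Simplified sketch (not the paper's exact method) of accelerated gradient
# descent where step-size parameters are replaced by exact 1-D line searches.
import numpy as np
from scipy.optimize import minimize_scalar

def agd_line_search(f, grad, x0, iters=100):
    x = z = np.asarray(x0, dtype=float)
    A = 0.0  # accumulated weight, estimating-sequence style
    for _ in range(iters):
        # Small-dimensional relaxation: exact search on the segment [x, z].
        beta = minimize_scalar(lambda b: f(x + b * (z - x)),
                               bounds=(0.0, 1.0), method="bounded").x
        y = x + beta * (z - x)
        g = grad(y)
        # Exact line search along the anti-gradient from y.
        h = minimize_scalar(lambda t: f(y - t * g),
                            bounds=(0.0, 1e3), method="bounded").x
        x = y - h * g
        # Weight recursion giving the usual A_k ~ k^2 growth; the paper
        # instead derives the new weight from the line-search progress.
        a = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * A))
        A += a
        # Dual ("mirror") step; with the Euclidean prox this is plain
        # accumulation of weighted gradients.
        z = z - a * g
    return x

if __name__ == "__main__":
    Q = np.diag([1.0, 10.0, 100.0])          # ill-conditioned quadratic test
    f = lambda v: 0.5 * v @ Q @ v
    grad = lambda v: Q @ v
    print(agd_line_search(f, grad, np.ones(3), iters=50))
```

The two bounded scalar minimizations play the role of the "small-dimensional relaxation oracle" in the title: each iteration only requires solving cheap one-dimensional subproblems rather than knowing the Lipschitz constant of the gradient.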
Recommendations
- Accelerated primal-dual gradient descent with linesearch for convex, nonconvex, and nonsmooth optimization problems
- Gradient methods for minimizing composite functions
- Accelerated gradient descent methods with line search
- Universal gradient methods for convex optimization problems
- A first-order primal-dual algorithm with linesearch
Cites Work
- New limited memory bundle method for large-scale nonsmooth optimization
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Title not available
- Smooth minimization of non-smooth functions
- Introductory lectures on convex optimization. A basic course.
- Title not available
- Gradient methods for minimizing composite functions
- Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
- Title not available
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Universal gradient methods for convex optimization problems
- Efficient first-order methods for convex minimization: a constructive approach
- Information-based complexity of linear operator equations
- A fast dual proximal gradient algorithm for convex minimization and applications
- Fast Primal-Dual Gradient Method for Strongly Convex Minimization Problems with Linear Constraints
- Generalizing the Optimized Gradient Method for Smooth Convex Minimization
- Generalized uniformly optimal methods for nonlinear programming
- On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
- A stable alternative to Sinkhorn's algorithm for regularized optimal transport
- A universal modification of the linear coupling method
Cited In (15)
- Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
- Inexact model: a framework for optimization and variational inequalities
- Multistage transportation model and sufficient conditions for its potentiality
- Smooth monotone stochastic variational inequalities and saddle point problems: a survey
- Improved exploitation of higher order smoothness in derivative-free optimization
- Accelerated methods for weakly-quasi-convex optimization problems
- Complexity-optimal and parameter-free first-order methods for finding stationary points of composite optimization problems
- Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach
- Stochastic saddle-point optimization for the Wasserstein barycenter problem
- Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
- Geodesic convexity of the symmetric eigenvalue problem and convergence of steepest descent
- Near-optimal tensor methods for minimizing the gradient norm of convex functions and accelerated primal–dual tensor methods
- Network Utility Maximization by Updating Individual Transmission Rates
- Alternating minimization methods for strongly convex optimization
- Recent Theoretical Advances in Non-Convex Optimization