Smooth Optimization with Approximate Gradient
Publication:3395010
Abstract: We show that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is only computed up to a small, uniformly bounded error. In applications of this method to semidefinite programs, this means in some instances computing only a few leading eigenvalues of the current iterate instead of a full matrix exponential, which significantly reduces the method's computational cost. This also allows sparse problems to be solved efficiently using sparse maximum eigenvalue packages.
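The abstract's core claim — that Nesterov's accelerated method keeps its optimal complexity under a small, uniformly bounded gradient error — can be illustrated with a minimal sketch. The scheme below is a standard Nesterov-style accelerated gradient iteration with a simulated bounded oracle error of size `delta`; it is an assumption-laden illustration, not the paper's exact algorithm or its semidefinite-programming application.

```python
import numpy as np

def accelerated_gradient_inexact(grad, x0, L, steps=200, delta=0.0, rng=None):
    """Nesterov-style accelerated gradient method where each gradient
    evaluation may carry an error bounded in norm by `delta`.
    Simplified sketch; hypothetical parameter names."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(steps):
        g = grad(y)
        if delta > 0:  # simulate a uniformly bounded oracle error
            e = rng.normal(size=g.shape)
            g = g + delta * e / np.linalg.norm(e)
        x_new = y - g / L                       # gradient step with step 1/L
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# Example: minimize f(x) = 0.5 * ||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of the gradient
x_exact = accelerated_gradient_inexact(grad, np.zeros(2), L)
x_noisy = accelerated_gradient_inexact(grad, np.zeros(2), L, delta=1e-6)
```

With `delta` small, the inexact run lands essentially on the same minimizer as the exact one, matching the paper's message that a uniformly bounded gradient error does not destroy the method's accuracy.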
Recommendations
- An optimal gradient method for smooth strongly convex minimization
- First-order methods of smooth convex optimization with inexact oracle
- Optimized first-order methods for smooth convex minimization
- A Stochastic Smoothing Algorithm for Semidefinite Programming
- Smoothing technique and its applications in semidefinite optimization
Cited in (41)
- Worst-case convergence analysis of inexact gradient and Newton methods through semidefinite programming performance estimation
- An accelerated linearized alternating direction method of multipliers
- An acceleration procedure for optimal first-order methods
- Generalized mirror prox algorithm for monotone variational inequalities: Universality and inexact oracle
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- An optimal method for stochastic composite optimization
- A generalization of Löwner-John's ellipsoid theorem
- Universal intermediate gradient method for convex problems with inexact oracle
- Gradient-Type Methods for Optimization Problems with Polyak-Łojasiewicz Condition: Early Stopping and Adaptivity to Inexactness Parameter
- Exact worst-case performance of first-order methods for composite convex optimization
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- New analysis and results for the Frank-Wolfe method
- A dual gradient-projection algorithm for model predictive control in fixed-point arithmetic
- Analysis of biased stochastic gradient descent using sequential semidefinite programs
- Inexact tensor methods and their application to stochastic convex optimization
- Accelerated differential inclusion for convex optimization
- Composite convex optimization with global and local inexact oracles
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
- Lower bounds for parallel and randomized convex optimization
- An introduction to continuous optimization for imaging
- Accelerated gradient sliding for structured convex optimization
- Dynamic smoothness parameter for fast gradient methods
- Accelerated gradient methods with absolute and relative noise in the gradient
- Semi-discrete optimal transport: hardness, regularization and numerical solution
- An inexact dual fast gradient-projection method for separable convex optimization with linear coupled constraints
- Differentially private inference via noisy optimization
- Sequential subspace optimization for quasar-convex optimization problems with inexact gradient
- Automatic alignment for three-dimensional tomographic reconstruction
- Primal-dual first-order methods with \({\mathcal {O}(1/\varepsilon)}\) iteration-complexity for cone programming
- Robust accelerated gradient methods for smooth strongly convex functions
- First-order methods of smooth convex optimization with inexact oracle
- Unifying framework for accelerated randomized methods in convex optimization
- First-order methods for convex optimization
- Stopping rules for gradient methods for non-convex problems with additive noise in gradient
- Robust hybrid zero-order optimization algorithms with acceleration via averaging in time
- HT-AWGM: a hierarchical Tucker-adaptive wavelet Galerkin method for high-dimensional elliptic problems
- Computing the best approximation over the intersection of a polyhedral set and the doubly nonnegative cone
- Small errors in random zeroth-order optimization are imaginary
- Inexact first-order primal-dual algorithms
- A frequency-domain analysis of inexact gradient methods
- Finite-sum smooth optimization with SARAH