Smooth Optimization with Approximate Gradient
From MaRDI portal
Publication: 3395010
DOI: 10.1137/060676386
zbMATH Open: 1180.90378
arXiv: math/0512344
OpenAlex: W2121210949
MaRDI QID: Q3395010
FDO: Q3395010
Authors: Alexandre d'Aspremont
Publication date: 20 August 2009
Published in: SIAM Journal on Optimization
Abstract: We show that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is only computed up to a small, uniformly bounded error. In applications of this method to semidefinite programs, this means in some instances computing only a few leading eigenvalues of the current iterate instead of a full matrix exponential, which significantly reduces the method's computational cost. This also allows sparse problems to be solved efficiently using sparse maximum eigenvalue packages.
Full work available at URL: https://arxiv.org/abs/math/0512344
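The abstract describes running Nesterov's smooth first-order method when the gradient oracle returns only an approximation with a small, uniformly bounded error. The sketch below is not taken from the paper; the toy quadratic, the constant step 1/L, and the noise model are illustrative assumptions. It shows one common form of the smooth scheme driven by such an inexact oracle.

```python
import numpy as np

def nesterov_approx_grad(approx_grad, x0, L, n_iter=500):
    """Nesterov's smooth scheme driven by an inexact gradient oracle.

    approx_grad(x) is assumed to return the true gradient up to a small,
    uniformly bounded error; L is a Lipschitz constant of the true gradient.
    """
    x0 = np.asarray(x0, dtype=float)
    y = x0.copy()                      # query point for the oracle
    g_sum = np.zeros_like(x0)          # weighted sum of past gradients
    x = y
    for k in range(n_iter):
        g = approx_grad(y)             # inexact gradient call
        x = y - g / L                  # gradient step from the query point
        g_sum += 0.5 * (k + 1) * g
        z = x0 - g_sum / L             # dual-averaging step (Euclidean prox)
        y = 2.0 / (k + 3) * z + (k + 1) / (k + 3) * x
    return x

# Toy usage (hypothetical): strongly convex quadratic with a gradient
# corrupted by noise of norm at most delta.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
A = A.T @ A + np.eye(20)
b = rng.standard_normal(20)
L = np.linalg.eigvalsh(A).max()
delta = 1e-3                           # uniform bound on the gradient error

def approx_grad(x):
    noise = rng.standard_normal(x.shape)
    return A @ x - b + delta * noise / np.linalg.norm(noise)

x_hat = nesterov_approx_grad(approx_grad, np.zeros(20), L)
print("distance to optimum:", np.linalg.norm(x_hat - np.linalg.solve(A, b)))
```

In the semidefinite setting discussed in the abstract, the inexact oracle would come from approximating the matrix-exponential gradient by a few leading eigenvalues of the current iterate rather than from injected noise as in this toy example.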
Recommendations
- An optimal gradient method for smooth strongly convex minimization
- First-order methods of smooth convex optimization with inexact oracle
- Optimized first-order methods for smooth convex minimization
- A Stochastic Smoothing Algorithm for Semidefinite Programming
- Smoothing technique and its applications in semidefinite optimization
Mathematics Subject Classification
- Numerical mathematical programming methods (65K05)
- Semidefinite programming (90C22)
- Abstract computational complexity for mathematical programming problems (90C60)
Cited In (41)
- Composite convex optimization with global and local inexact oracles
- An inexact dual fast gradient-projection method for separable convex optimization with linear coupled constraints
- Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization
- First-order methods of smooth convex optimization with inexact oracle
- Statistical Query Algorithms for Mean Vector Estimation and Stochastic Convex Optimization
- HT-AWGM: a hierarchical Tucker-adaptive wavelet Galerkin method for high-dimensional elliptic problems
- An optimal method for stochastic composite optimization
- Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions
- Worst-Case Convergence Analysis of Inexact Gradient and Newton Methods Through Semidefinite Programming Performance Estimation
- First-order methods for convex optimization
- Small errors in random zeroth-order optimization are imaginary
- Inexact tensor methods and their application to stochastic convex optimization
- Semi-discrete optimal transport: hardness, regularization and numerical solution
- Primal-dual first-order methods with \({\mathcal {O}(1/\varepsilon)}\) iteration-complexity for cone programming
- Stopping rules for gradient methods for non-convex problems with additive noise in gradient
- Universal intermediate gradient method for convex problems with inexact oracle
- Inexact first-order primal-dual algorithms
- A frequency-domain analysis of inexact gradient methods
- Finite-sum smooth optimization with SARAH
- Lower Bounds for Parallel and Randomized Convex Optimization
- An acceleration procedure for optimal first-order methods
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Accelerated differential inclusion for convex optimization
- Generalized mirror prox algorithm for monotone variational inequalities: Universality and inexact oracle
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- New analysis and results for the Frank-Wolfe method
- Accelerated gradient methods with absolute and relative noise in the gradient
- Unifying framework for accelerated randomized methods in convex optimization
- Automatic alignment for three-dimensional tomographic reconstruction
- Gradient-Type Methods for Optimization Problems with Polyak-Łojasiewicz Condition: Early Stopping and Adaptivity to Inexactness Parameter
- A dual gradient-projection algorithm for model predictive control in fixed-point arithmetic
- Sequential Subspace Optimization for Quasar-Convex Optimization Problems with Inexact Gradient
- Differentially private inference via noisy optimization
- An introduction to continuous optimization for imaging
- Robust hybrid zero-order optimization algorithms with acceleration via averaging in time
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
- A generalization of Löwner-John's ellipsoid theorem
- Analysis of biased stochastic gradient descent using sequential semidefinite programs
- An Accelerated Linearized Alternating Direction Method of Multipliers
- Accelerated gradient sliding for structured convex optimization
- Computing the Best Approximation over the Intersection of a Polyhedral Set and the Doubly Nonnegative Cone