Lower bounds for non-convex stochastic optimization
DOI: 10.1007/s10107-022-01822-7 · zbMATH Open: 1517.90087 · arXiv: 1912.02365 · OpenAlex: W2993258424 · MaRDI QID: Q6038643 · FDO: Q6038643
Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, Blake Woodworth
Publication date: 2 May 2023
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1912.02365
Mathematics Subject Classification
- Large-scale problems in mathematical programming (90C06)
- Analysis of algorithms and problem complexity (68Q25)
- Nonconvex programming, global optimization (90C26)
- Stochastic programming (90C15)
- Abstract computational complexity for mathematical programming problems (90C60)
Cites Work
- Title not available
- Introductory lectures on convex optimization. A basic course.
- Title not available
- Title not available
- Convergence of estimates under dimensionality restrictions
- Title not available
- Some NP-complete problems in quadratic and nonlinear programming
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- Title not available
- Title not available
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Cubic regularization of Newton method and its global performance
- On parallel complexity of nonsmooth convex optimization
- Title not available
- Complexity bounds for second-order optimality in unconstrained optimization
- Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
- Information-Based Complexity, Feedback and Dynamics in Convex Programming
- Lower Bounds on the Oracle Complexity of Nonsmooth Convex Optimization via Information Theory
- Black-Box Complexity of Local Minimization
- Worst-case evaluation complexity and optimality of second-order methods for nonconvex smooth optimization
- Optimization Methods for Large-Scale Machine Learning
- A geometric analysis of phase retrieval
- Lower Bounds for Parallel and Randomized Convex Optimization
- Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution
- Lower bounds for finding stationary points I
- Lower bounds for finding stationary points II: first-order methods
Cited In (8)
- Accelerated doubly stochastic gradient descent for tensor CP decomposition
- Title not available
- Stochastic inexact augmented Lagrangian method for nonconvex expectation constrained optimization
- High probability bounds on AdaGrad for constrained weakly convex optimization
- Title not available
- Convergence rates for stochastic approximation: biased noise with unbounded variance, and applications
- Title not available
- The stochastic lower bound