Lower bounds for non-convex stochastic optimization
DOI: 10.1007/s10107-022-01822-7 · zbMath: 1517.90087 · arXiv: 1912.02365 · OpenAlex: W2993258424 · MaRDI QID: Q6038643
Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, Blake Woodworth
Publication date: 2 May 2023
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1912.02365
MSC classification:
- Analysis of algorithms and problem complexity (68Q25)
- Large-scale problems in mathematical programming (90C06)
- Abstract computational complexity for mathematical programming problems (90C60)
- Nonconvex programming, global optimization (90C26)
- Stochastic programming (90C15)
Related Items (4)
Cites Work
- Complexity bounds for second-order optimality in unconstrained optimization
- On parallel complexity of nonsmooth convex optimization
- Introductory lectures on convex optimization. A basic course.
- A geometric analysis of phase retrieval
- Implicit regularization in nonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution
- Lower bounds for finding stationary points I
- Lower bounds for finding stationary points II: first-order methods
- Cubic regularization of Newton method and its global performance
- Convergence of estimates under dimensionality restrictions
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- Some NP-complete problems in quadratic and nonlinear programming
- Optimization Methods for Large-Scale Machine Learning
- Black-Box Complexity of Local Minimization
- Lower Bounds for Parallel and Randomized Convex Optimization
- Worst-case evaluation complexity and optimality of second-order methods for nonconvex smooth optimization
- Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
- Information-Based Complexity, Feedback and Dynamics in Convex Programming
- Lower Bounds on the Oracle Complexity of Nonsmooth Convex Optimization via Information Theory
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming