How to trap a gradient flow
From MaRDI portal
Publication:6573775
DOI: 10.1137/21M1397854
MaRDI QID: Q6573775
Authors: Sébastien Bubeck, Dan Mikulincer
Publication date: 17 July 2024
Published in: SIAM Journal on Computing
Recommendations
- Lower bounds for finding stationary points I
- Lower bounds for finding stationary points II: first-order methods
- Lower bounds for non-convex stochastic optimization
- Gradient descent finds the cubic-regularized nonconvex Newton step
- Gradient Transformation Trajectory Following Algorithms for Determining Stationary Min-Max Saddle Points
Cites Work
- Introductory lectures on convex optimization. A basic course.
- Convex optimization: algorithms and complexity
- On the quantum query complexity of local search in two and three dimensions
- On parallel complexity of nonsmooth convex optimization
- Nearest-neighbor walks with low predictability profile and percolation in \(2+\varepsilon\) dimensions
- Unpredictable paths and percolation
- New upper and lower bounds for randomized and quantum local search
- Black-Box Complexity of Local Minimization
- Lower Bounds for Local Search by Quantum Arguments
- Lower bounds for finding stationary points I
This page was built for publication: How to trap a gradient flow