How to trap a gradient flow
From MaRDI portal
Publication:6573775
Recommendations
- Lower bounds for finding stationary points I
- Lower bounds for finding stationary points II: first-order methods
- Lower bounds for non-convex stochastic optimization
- Gradient descent finds the cubic-regularized nonconvex Newton step
- Gradient Transformation Trajectory Following Algorithms for Determining Stationary Min-Max Saddle Points
Cites work
- Scientific article; zbMATH DE number 3790208 (no title recorded)
- Black-Box Complexity of Local Minimization
- Convex optimization: algorithms and complexity
- Introductory lectures on convex optimization. A basic course.
- Lower Bounds for Local Search by Quantum Arguments
- Lower bounds for finding stationary points I
- Nearest-neighbor walks with low predictability profile and percolation in \(2+\varepsilon\) dimensions
- New upper and lower bounds for randomized and quantum local search
- On parallel complexity of nonsmooth convex optimization
- On the quantum query complexity of local search in two and three dimensions
- Unpredictable paths and percolation