scientific article; zbMATH DE number 7306906
Léon Bottou, Xiaoxia Wu, Rachel Ward
Publication date: 5 February 2021
Full work available at URL: https://arxiv.org/abs/1806.01811
Title: AdaGrad stepsizes: sharp convergence over nonconvex landscapes
Keywords: convergence; large-scale optimization; nonconvex optimization; adaptive gradient descent; stochastic offline learning
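The work analyzes the AdaGrad-Norm stepsize, in which gradient descent divides a fixed scale by a running accumulator of squared gradient norms. A minimal Python sketch of that update rule follows; the function name and the defaults for `eta` and `b0` are illustrative choices, not values taken from the paper.

```python
import numpy as np

def adagrad_norm(grad, x0, eta=1.0, b0=0.1, n_steps=1000):
    """Sketch of the AdaGrad-Norm update (cf. arXiv:1806.01811).

    grad : function returning a (possibly stochastic) gradient at x
    eta  : stepsize scale (illustrative default, not from the paper)
    b0   : initial value of the accumulator (illustrative default)
    """
    x = np.asarray(x0, dtype=float)
    b2 = b0 ** 2  # scalar accumulator of squared gradient norms
    for _ in range(n_steps):
        g = grad(x)
        b2 += np.dot(g, g)             # b_{j+1}^2 = b_j^2 + ||g_j||^2
        x = x - eta * g / np.sqrt(b2)  # x_{j+1} = x_j - eta * g_j / b_{j+1}
    return x

# Usage: minimize f(x) = ||x||^2 / 2, whose gradient is x itself.
x_star = adagrad_norm(lambda x: x, x0=np.ones(5))
```

Because the accumulator only grows, the effective stepsize `eta / sqrt(b2)` decays automatically, which is what removes the need to tune a learning-rate schedule by hand.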
Related Items
- Machine learning design of volume of fluid schemes for compressible flows
- Random Batch Methods for Classical and Quantum Interacting Particle Systems and Statistical Samplings
- Stochastic momentum methods for non-convex learning without bounded assumptions
- SVRG meets AdaGrad: painless variance reduction
- Stochastic Gauss-Newton algorithms for online PCA
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
- Adaptive step size rules for stochastic optimization in large-scale learning
- An adaptive Riemannian gradient method without function evaluations
- Recent Theoretical Advances in Non-Convex Optimization
- Incremental without replacement sampling in nonconvex optimization
- An adaptive Polyak heavy-ball method
Cites Work
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Robust Stochastic Approximation Approach to Stochastic Programming
- Two-Point Step Size Gradient Methods
- Accelerated Methods for Nonconvex Optimization
- Optimization Methods for Large-Scale Machine Learning
- Finding approximate local minima faster than gradient descent
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- A Stochastic Approximation Method