Understanding generalization error of SGD in nonconvex optimization
Publication: Q2127232
DOI: 10.1007/s10994-021-06056-w
OpenAlex: W3203457432
MaRDI QID: Q2127232
Publication date: 20 April 2022
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-021-06056-w
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Title not available
- Convex analysis and monotone operator theory in Hilbert spaces
- Stability and generalization (DOI: 10.1162/153244302760200704)
- Robust Stochastic Approximation Approach to Stochastic Programming
- Title not available
- Understanding Machine Learning
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Title not available
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- Title not available
- A theory of the learnable
- Title not available
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Learnability, stability and uniform convergence
- Rapid, robust, and reliable blind deconvolution via nonconvex optimization
- Robustness and generalization
- Title not available
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- Robust Large Margin Deep Neural Networks
Cited In (1)