Understanding generalization error of SGD in nonconvex optimization
Publication:2127232
Cites work
- scientific article; zbMATH DE number 1332320
- scientific article; zbMATH DE number 823069
- scientific article; zbMATH DE number 3313108
- scientific article; zbMATH DE number 3371284
- doi:10.1162/153244302760200704
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A theory of the learnable
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Convex analysis and monotone operator theory in Hilbert spaces
- Large-scale machine learning with stochastic gradient descent
- Learnability, stability and uniform convergence
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- On complexity of stochastic programming problems
- Optimal rates for multi-pass stochastic gradient methods
- Rapid, robust, and reliable blind deconvolution via nonconvex optimization
- Robust Large Margin Deep Neural Networks
- Robust Stochastic Approximation Approach to Stochastic Programming
- Robustness and generalization
- Stability of randomized learning algorithms
- Understanding machine learning. From theory to algorithms
Cited in (1)