Asymptotic analysis via stochastic differential equations of gradient descent algorithms in statistical and computational paradigms
From MaRDI portal
Publication:5148992
Recommendations
- Stochastic gradient descent in continuous time: a central limit theorem
- Asymptotic bias of stochastic gradient search
- Convergence and convergence rate of stochastic gradient search in the case of multiple and non-isolated extrema
- Stochastic modified equations and dynamics of stochastic gradient algorithms. I: Mathematical foundations
- Stochastic gradient descent with noise of machine learning type. I: Discrete time analysis
Cites work
- scientific article; zbMATH DE number 4015993 (no title available)
- scientific article; zbMATH DE number 3850830 (no title available)
- scientific article; zbMATH DE number 3780265 (no title available)
- scientific article; zbMATH DE number 3790208 (no title available)
- scientific article; zbMATH DE number 51427 (no title available)
- scientific article; zbMATH DE number 54145 (no title available)
- scientific article; zbMATH DE number 1354815 (no title available)
- scientific article; zbMATH DE number 481040 (no title available)
- scientific article; zbMATH DE number 1057566 (no title available)
- scientific article; zbMATH DE number 1076783 (no title available)
- scientific article; zbMATH DE number 1972910 (no title available)
- scientific article; zbMATH DE number 6860839 (no title available)
- scientific article; zbMATH DE number 1834045 (no title available)
- scientific article; zbMATH DE number 2107836 (no title available)
- scientific article; zbMATH DE number 819734 (no title available)
- scientific article; zbMATH DE number 5060482 (no title available)
- A differential equation for modeling Nesterov's accelerated gradient method: theory and insights
- A variational perspective on accelerated methods in optimization
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Acceleration of Stochastic Approximation by Averaging
- An approximation of partial sums of independent RV's, and the sample DF. I
- An approximation of partial sums of independent RV's, and the sample DF. II
- Approximation for bootstrapped empirical processes
- Asymptotic and finite-sample properties of estimators based on stochastic gradients
- Bootstrapping empirical functions
- Cube root asymptotics
- Deep learning
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Ergodicity for Infinite Dimensional Systems
- Introductory lectures on convex optimization. A basic course.
- Lectures on white noise functionals.
- Nonlinear optimization.
- Numerical Methods for Ordinary Differential Equations
- Robust Stochastic Approximation Approach to Stochastic Programming
- Scalable estimation strategies based on stochastic approximations: classical results and new insights
- Statistical inference for model parameters in stochastic gradient descent
- Stochastic gradient descent in continuous time
- Stochastic methods. A handbook for the natural and social sciences
- Strong approximation for multivariate empirical and related processes, via KMT constructions
- Strong approximation for set-indexed partial sum processes via KMT constructions. I
- Strong approximation for set-indexed partial-sum processes, via KMT constructions. II
- The exit problem for small random perturbations of dynamical systems with a hyperbolic fixed point
- Theoretical Guarantees for Approximate Sampling from Smooth and Log-Concave Densities
- User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
- Weak convergence and empirical processes. With applications to statistics
Cited in (5)
- Gradient procedures for stochastic approximation with dependent noise and their asymptotic behaviour
- Analysis of stochastic gradient descent in continuous time
- Central limit theorems for stochastic gradient descent with averaging for stable manifolds
- Discrete-time simulated annealing: a convergence analysis via the Eyring-Kramers law
- Asymptotic bias of stochastic gradient search