Stochastic heavy ball
DOI: 10.1214/18-EJS1395 · zbMATH Open: 1392.62244 · arXiv: 1609.04228 · OpenAlex: W2520472982 · MaRDI QID: Q1697485 · FDO: Q1697485
Authors: Sébastien Gadat, Fabien Panloup, Sofiane Saadane
Publication date: 20 February 2018
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1609.04228
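The publication studies the stochastic heavy-ball method, i.e. Polyak's momentum iteration driven by noisy gradient estimates. As a rough illustration only, here is a minimal generic sketch of that recursion with a constant step size, fixed momentum, and additive Gaussian gradient noise; the paper itself analyzes more refined variants (e.g. with decreasing steps and memory), which this sketch does not reproduce.

```python
import numpy as np

def stochastic_heavy_ball(grad, x0, steps=2000, gamma=0.05, beta=0.9,
                          noise=0.1, seed=0):
    """Generic stochastic heavy-ball sketch (not the paper's exact scheme):

        x_{k+1} = x_k - gamma * (grad(x_k) + xi_k) + beta * (x_k - x_{k-1})

    where xi_k is i.i.d. Gaussian noise modeling a stochastic gradient oracle.
    """
    rng = np.random.default_rng(seed)
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(steps):
        g = grad(x) + noise * rng.standard_normal(x.shape)  # noisy gradient
        # Gradient step plus momentum (inertia) term:
        x, x_prev = x - gamma * g + beta * (x - x_prev), x
    return x

# Toy example: minimize f(x) = 0.5 * ||x - 1||^2, whose minimizer is (1, 1).
x_star = stochastic_heavy_ball(lambda x: x - 1.0, x0=np.zeros(2))
```

With a strongly convex quadratic as above, the iterates settle in a neighborhood of the minimizer whose size scales with the step size and the noise level, which is the regime the paper's convergence results make precise.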
Recommendations
- Stochastic heavy-ball method for constrained stochastic optimization problems
- Convergence rates of the heavy ball method for quasi-strongly convex optimization
- Non-monotone Behavior of the Heavy Ball Method
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
MSC classifications
- Applications of Brownian motions and diffusion theory (population genetics, absorption problems, etc.) (60J70)
- Central limit and other weak theorems (60F05)
- Stochastic approximation (62L20)
- Estimates of eigenvalues in context of PDEs (35P15)
- Hypoelliptic equations (35H10)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Title not available
- Introductory lectures on convex optimization. A basic course.
- Title not available
- Acceleration of Stochastic Approximation by Averaging
- Title not available
- Title not available
- A Stochastic Approximation Method
- Title not available
- Concentration inequalities. A nonasymptotic theory of independence
- Stochastic approximation. A dynamical systems viewpoint.
- Title not available
- Title not available
- Title not available
- Multidimensional diffusion processes.
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Stability of Markovian processes III: Foster–Lyapunov criteria for continuous-time processes
- Hypocoercivity
- Title not available
- An optimal method for stochastic composite optimization
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Local minima and convergence in low-rank semidefinite programming
- Self-interacting diffusions.
- Stochastic Estimation of the Maximum of a Regression Function
- Ergodicity for SDEs and approximations: locally Lipschitz vector fields and degenerate noise.
- Stochastic approximation with two time scales
- A differential equation for modeling Nesterov's accelerated gradient method: theory and insights
- Some methods of speeding up the convergence of iteration methods
- Nonconvergence to unstable points in urn models and stochastic approximations
- On the Long Time Behavior of Second Order Differential Equations with Asymptotically Small Dissipation
- Large-scale machine learning with stochastic gradient descent
- Title not available
- An adaptive scheme for the approximation of dissipative systems
- Title not available
- Asymptotic pseudotrajectories and chain recurrent flows, with applications
- Do stochastic algorithms avoid traps?
- On the convergence of gradient-like flows with noisy gradient input
- A stochastic algorithm for feature selection in pattern recognition
- Title not available
- Long time behaviour and stationary regime of memory gradient diffusions
- Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression
- Optimal non-asymptotic analysis of the Ruppert-Polyak averaging stochastic algorithm
- Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
Cited In (25)
- Title not available
- Title not available
- An adaptive Polyak heavy-ball method
- On the convergence analysis of aggregated heavy-ball method
- Optimal non-asymptotic analysis of the Ruppert-Polyak averaging stochastic algorithm
- On the rates of convergence of parallelized averaged stochastic gradient algorithms
- A general system of differential equations to model first-order adaptive algorithms
- Non asymptotic controls on a recursive superquantile approximation
- Stochastic optimization with momentum: convergence, fluctuations, and traps avoidance
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- Title not available
- Subgradient Sampling for Nonsmooth Nonconvex Minimization
- Convergence rates of the heavy ball method for quasi-strongly convex optimization
- Convergence and dynamical behavior of the ADAM algorithm for nonconvex stochastic optimization
- Title not available
- Stochastic approximation algorithms for superquantiles estimation
- Convergence of gradient algorithms for nonconvex \(C^{1+ \alpha}\) cost functions
- Stochastic relaxed inertial forward-backward-forward splitting for monotone inclusions in Hilbert spaces
- Stochastic heavy-ball method for constrained stochastic optimization problems
- Stochastic differential equations for modeling first order optimization methods
- A robust control approach to asymptotic optimality of the heavy ball method for optimization of quadratic functions
- Several kinds of acceleration techniques for unconstrained optimization first-order algorithms
- Nonsmooth nonconvex stochastic heavy ball
- SRKCD: a stabilized Runge-Kutta method for stochastic optimization
- Differentially Private Accelerated Optimization Algorithms