Stochastic Minimization with Constant Step-Size: Asymptotic Laws
From MaRDI portal
Publication:3725895
Cited in (12):
- Stochastic approximation with nondecaying gain: error bound and data-driven gain-tuning
- Constant step stochastic approximations involving differential inclusions: stability, long-run convergence and applications
- Non-asymptotic error bounds for constant stepsize stochastic approximation for tracking mobile agents
- A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares)
- Asymptotic behavior of constrained stochastic approximations via the theory of large deviations
- Scientific article; zbMATH DE number 7733450 (no title available)
- Stochastic algorithms for the estimation of an optimal solution of a LP problem. Convergence and central limit theorem
- Non-asymptotic confidence bounds for stochastic approximation algorithms with constant step size
- Sampling from a log-concave distribution with projected Langevin Monte Carlo
- Bridging the gap between constant step size stochastic gradient descent and Markov chains
- Estimation of an optimal solution of a LP problem with unknown objective function
- Stochastic approximation algorithms: overview and recent trends.