Stochastic Minimization with Constant Step-Size: Asymptotic Laws
From MaRDI portal
Publication: 3725895
DOI: 10.1137/0324039 · zbMATH Open: 0594.90089 · OpenAlex: W1991238514 · MaRDI QID: Q3725895
Authors: Georg Ch. Pflug
Publication date: 1986
Published in: SIAM Journal on Control and Optimization
Full work available at URL: https://doi.org/10.1137/0324039
Keywords: asymptotic distribution; stationary distribution; Markovian process; constant step-size; constant gain; constrained stochastic approximation process
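The keywords describe the central phenomenon of the paper: with a constant (non-decaying) step size, a stochastic approximation iteration does not converge to the minimizer but forms a Markov chain that settles into a stationary distribution around it. A minimal sketch of this effect, using a one-dimensional quadratic objective and a hypothetical simulation (not Pflug's construction), might look as follows; for f(x) = x²/2 with additive Gaussian gradient noise, the stationary variance can be computed in closed form as ε·σ²/(2 − ε):

```python
import random

def constant_step_sgd(step=0.1, noise_std=1.0, n_iters=100_000,
                      burn_in=10_000, seed=0):
    """Simulate x_{k+1} = x_k - step * (grad f(x_k) + noise) for f(x) = x^2/2.

    The iterates form a Markov chain. With a constant step size the chain
    does not converge to the minimizer x* = 0; instead it reaches a
    stationary distribution centered at x* whose variance, for this
    quadratic, is step * noise_std^2 / (2 - step).
    """
    rng = random.Random(seed)
    x = 1.0  # arbitrary starting point
    samples = []
    for k in range(n_iters):
        # Unbiased but noisy gradient of f(x) = x^2 / 2
        grad_estimate = x + rng.gauss(0.0, noise_std)
        x -= step * grad_estimate
        if k >= burn_in:  # discard transient, keep stationary-phase samples
            samples.append(x)
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean, var

if __name__ == "__main__":
    step = 0.1
    mean, var = constant_step_sgd(step=step)
    theory = step * 1.0 ** 2 / (2 - step)  # stationary variance, quadratic case
    print(f"empirical mean {mean:.4f}, variance {var:.4f}, theory {theory:.4f}")
```

The empirical mean stays near the minimizer while the empirical variance remains bounded away from zero, matching the "asymptotic law" viewpoint: the object of study is the limiting distribution of the iterates, not a limit point.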
Cited In (11)
- Stochastic approximation with nondecaying gain: Error bound and data‐driven gain‐tuning
- Constant step stochastic approximations involving differential inclusions: stability, long-run convergence and applications
- Non-asymptotic error bounds for constant stepsize stochastic approximation for tracking mobile agents
- A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares)
- Asymptotic behavior of constrained stochastic approximations via the theory of large deviations
- Stochastic algorithms for the estimation of an optimal solution of a LP problem. Convergence and central limit theorem
- Non-asymptotic confidence bounds for stochastic approximation algorithms with constant step size
- Sampling from a log-concave distribution with projected Langevin Monte Carlo
- Bridging the gap between constant step size stochastic gradient descent and Markov chains
- Estimation of an optimal solution of a LP problem with unknown objective function
- Stochastic approximation algorithms: overview and recent trends.