Asymptotic bias of stochastic gradient search

From MaRDI portal
Publication:1704136




Abstract: The asymptotic behavior of the stochastic gradient algorithm with a biased gradient estimator is analyzed. Relying on arguments from dynamical systems theory (chain recurrence) and differential geometry (the Yomdin theorem and the Łojasiewicz inequality), tight bounds on the asymptotic bias of the iterates generated by such an algorithm are derived. The results hold under mild conditions and cover a broad class of high-dimensional nonlinear algorithms. They are then applied to study the asymptotic properties of policy-gradient (reinforcement) learning and adaptive population Monte Carlo sampling, as well as the asymptotic behavior of recursive maximum split-likelihood estimation in hidden Markov models.
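To make the setting concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of a stochastic gradient recursion whose gradient estimates carry a constant bias. For a quadratic objective with a diminishing step size, the iterates settle near a stationary point of the biased mean field rather than the true minimizer, and the resulting offset scales with the size of the bias — the quantity the paper's bounds control in far greater generality.

```python
import random

def biased_sgd(theta0, grad, bias, steps=20000, seed=0):
    """Stochastic gradient descent where every gradient estimate is
    corrupted by a constant bias vector plus zero-mean Gaussian noise.
    Illustrative only; the paper treats general nonlinear settings."""
    rng = random.Random(seed)
    theta = list(theta0)
    for n in range(1, steps + 1):
        gamma = 1.0 / n  # diminishing step size
        g = grad(theta)
        theta = [t - gamma * (gi + bi + rng.gauss(0.0, 0.1))
                 for t, gi, bi in zip(theta, g, bias)]
    return theta

# Quadratic objective f(theta) = 0.5 * ||theta||^2, true minimizer at 0.
grad = lambda th: list(th)

# With bias b, the biased mean field grad(theta) + b vanishes at -b,
# so the iterates converge near -b instead of the origin.
theta = biased_sgd([1.0, -1.0], grad, bias=[0.2, -0.1])
```

Here `theta` ends up close to `[-0.2, 0.1]`: the asymptotic bias of the iterates is of the same order as the bias in the gradient estimator, in line with the kind of bound the abstract describes.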



Cited in 17 documents.






This page was built for publication: Asymptotic bias of stochastic gradient search
