Convergence Rates of Finite Difference Stochastic Approximation Algorithms

arXiv: 1506.09211 · MaRDI QID: Q6263172

Liyi Dai

Publication date: 30 June 2015

Abstract: Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence of the Kiefer-Wolfowitz algorithm and the mirror descent algorithm under various updating schemes that use finite differences as gradient approximations. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences. In particular, the rate can be increased to n^{2/5} in general, and to n^{1/2} in Monte Carlo optimization for a broad class of problems, where n is the iteration number.
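For orientation, the Kiefer-Wolfowitz algorithm discussed in the abstract replaces the exact gradient in a stochastic-approximation update with finite differences of noisy function evaluations. The following is a minimal sketch of that idea in Python, assuming a central-difference implementation, the classical power-law schedules a_n = a/n and c_n = c/n^{1/4}, and a user-supplied noisy oracle noisy_f; none of these choices come from the paper itself, which studies how such implementation details affect the convergence rate.

    import numpy as np

    def kiefer_wolfowitz(noisy_f, x0, n_iters=5000, a=0.5, c=0.5):
        """Minimal Kiefer-Wolfowitz iteration with central finite differences.

        noisy_f(x): returns a noisy, unbiased evaluation of the objective f(x).
        The schedules a_n = a/n and c_n = c/n^(1/4) are classical illustrative
        choices, not the specific updating schemes analyzed in the paper.
        """
        x = np.asarray(x0, dtype=float)
        d = x.size
        for n in range(1, n_iters + 1):
            a_n = a / n            # gain (step size) schedule
            c_n = c / n ** 0.25    # finite-difference width schedule
            g = np.empty(d)
            for i in range(d):
                e = np.zeros(d)
                e[i] = c_n
                # two noisy evaluations per coordinate give a central-difference
                # estimate of the i-th partial derivative
                g[i] = (noisy_f(x + e) - noisy_f(x - e)) / (2.0 * c_n)
            x = x - a_n * g        # descend along the estimated gradient
        return x

    # Hypothetical usage: minimize f(x) = ||x||^2 observed with Gaussian noise.
    rng = np.random.default_rng(0)
    noisy_quadratic = lambda x: float(np.sum(x ** 2)) + rng.normal(scale=0.1)
    x_star = kiefer_wolfowitz(noisy_quadratic, x0=[1.0, -2.0])

This sketch corresponds to a baseline scheme; the abstract's point is that controlling how the finite differences are implemented can accelerate convergence beyond what such a baseline achieves.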