Convergence Rates of Finite Difference Stochastic Approximation Algorithms
Publication: 6263172
arXiv: 1506.09211
MaRDI QID: Q6263172
FDO: Q6263172
Publication date: 30 June 2015
Abstract: Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence for the Kiefer-Wolfowitz algorithm and the mirror descent algorithm under various updating schemes that use finite differences as gradient approximations. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences. In particular, it is shown that the rate can be increased to $n^{-2/5}$ in general and to $n^{-1/2}$ in Monte Carlo optimization for a broad class of problems, where $n$ is the iteration number.
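For orientation, the following is a minimal sketch of a Kiefer-Wolfowitz iteration with central (two-sided) finite-difference gradient estimates, assuming the classical gain schedules $a_n = a/n^{\alpha}$ and $c_n = c/n^{\gamma}$. The function names, default parameters, and toy objective are illustrative assumptions, not the paper's implementation or its accelerated schemes.

```python
import numpy as np

def kiefer_wolfowitz(f_noisy, x0, n_iters=10000, a=1.0, c=1.0,
                     alpha=1.0, gamma=1.0 / 6.0, rng=None):
    """Kiefer-Wolfowitz stochastic approximation with central
    finite-difference gradient estimates.

    f_noisy(x, rng) returns a noisy observation of the objective at x.
    Gains follow the classical schedules a_n = a/n**alpha and
    c_n = c/n**gamma (names and defaults are illustrative, not taken
    from the paper).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    d = x.size
    for n in range(1, n_iters + 1):
        a_n = a / n**alpha          # step size
        c_n = c / n**gamma          # finite-difference width
        g = np.empty(d)
        for i in range(d):
            e = np.zeros(d)
            e[i] = c_n
            # central difference along coordinate i from two noisy evaluations
            g[i] = (f_noisy(x + e, rng) - f_noisy(x - e, rng)) / (2.0 * c_n)
        x = x - a_n * g             # descent step
    return x

# Toy Monte Carlo objective: E[(x - theta)^2 + noise], minimized at theta.
theta = np.array([1.0, -2.0])
def f_noisy(x, rng):
    return np.sum((x - theta) ** 2) + rng.normal(scale=0.1)

print(kiefer_wolfowitz(f_noisy, x0=np.zeros(2), n_iters=5000))
```

In the Monte Carlo setting, one standard way to "control the implementation of the finite differences" is to reuse the same random draws for the $x + e$ and $x - e$ evaluations (common random numbers), which reduces the variance of the difference quotient and is a classical mechanism behind improved rates such as $n^{-1/2}$.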