A variational approach to stochastic minimization of convex functionals

From MaRDI portal
Publication:5146229

zbMATH Open: 1474.90326 · arXiv: 1605.03289 · MaRDI QID: Q5146229 · FDO: Q5146229


Authors: Miroslav Bačák


Publication date: 25 January 2021

Abstract: Stochastic methods for minimizing a convex integral functional, as initiated by Robbins and Monro in the early 1950s, rely on evaluating a gradient (or a subgradient if the function is not smooth) and moving in the corresponding direction. In contrast, we use a variational technique resulting in an implicit stochastic minimization method, which has recently appeared in several diverse contexts. Such an approach is desirable whenever the underlying space does not have a differentiable structure; moreover, it exhibits better stability properties, which makes it preferable even in linear spaces. Our results are formulated in locally compact Hadamard spaces, but they are new even in Euclidean space, the main novelty being more general growth conditions on the functional. We verify that the assumptions of our convergence theorem are satisfied in a few classical minimization problems.
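The contrast the abstract draws can be sketched in the Euclidean special case. Below, the explicit Robbins–Monro step moves along the negative stochastic gradient, while the implicit (variational/proximal) step solves a small minimization at each iteration. The least-squares loss, the closed-form proximal update, and all names (`sgd_step`, `prox_step`, the toy data) are illustrative assumptions for this sketch, not taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: minimize f(x) = (1/2n) * sum_i (a_i @ x - b_i)^2  (an assumed example).
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true  # consistent system, so the minimum value is 0


def sgd_step(x, a, bi, lam):
    # Explicit (Robbins-Monro) step: move along the negative gradient of f_i.
    return x - lam * (a @ x - bi) * a


def prox_step(x, a, bi, lam):
    # Implicit (variational) step: x+ = argmin_y f_i(y) + ||y - x||^2 / (2*lam).
    # For f_i(y) = 0.5 * (a @ y - bi)^2 this proximal subproblem has the
    # closed form below; note the damping factor 1 + lam * ||a||^2.
    return x - lam * (a @ x - bi) / (1.0 + lam * np.dot(a, a)) * a


def run(step, lam0, iters=2000):
    x = np.zeros(d)
    for k in range(iters):
        i = rng.integers(n)                      # pick a random summand f_i
        x = step(x, A[i], b[i], lam0 / np.sqrt(k + 1))  # decaying step size
    return x


# A large initial step size: the implicit iteration remains stable.
x_prox = run(prox_step, lam0=5.0)
print(f"proximal iterate error: {np.linalg.norm(x_prox - x_true):.4f}")
```

The stability claim in the abstract is visible in the formulas: the explicit step can overshoot whenever `lam * ||a||^2 > 2`, whereas the implicit step's denominator `1 + lam * ||a||^2` keeps the update a strict interpolation toward the subproblem's minimizer for any step size.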


Full work available at URL: https://arxiv.org/abs/1605.03289




Cited In (9)





This page was built for publication: A variational approach to stochastic minimization of convex functionals
