One-point gradient-free methods for smooth and non-smooth saddle-point problems


DOI: 10.1007/978-3-030-77876-7_10
zbMATH Open: 1487.90612
arXiv: 2103.00321
OpenAlex: W3176874790
MaRDI QID: Q2117626
FDO: Q2117626

Aleksandr Beznosikov, Vasilii Novitskii, Alexander V. Gasnikov

Publication date: 22 March 2022

Abstract: In this paper, we analyze gradient-free methods with one-point feedback for stochastic saddle-point problems $\min_x \max_y \varphi(x, y)$. For the non-smooth and smooth cases, we present an analysis in a general geometric setup with an arbitrary Bregman divergence. For problems with higher-order smoothness, the analysis is carried out only in the Euclidean case. The estimates we obtain match the best currently known estimates for gradient-free methods with one-point feedback for problems of minimizing a convex or strongly convex function. The paper uses three main approaches to recovering the gradient through finite differences: the standard one with a random direction, as well as its modifications with kernels and with residual feedback. We also provide experiments comparing these approaches on a matrix game.
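To make the "standard with a random direction" estimator from the abstract concrete, here is a minimal NumPy sketch. This is not the authors' code: the test problem, step-size rule, projection radius, and names such as `one_point_grad` are illustrative assumptions. The key point it shows is that both partial gradients are estimated from a single (possibly noisy) evaluation of $\varphi$ per iteration.

```python
import numpy as np

def one_point_grad(phi, x, y, tau, rng):
    """One-point feedback: estimate (grad_x, grad_y) of phi at (x, y)
    from a SINGLE evaluation of phi at a randomly perturbed point."""
    d = x.size + y.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)                  # uniform direction on the unit sphere
    ex, ey = e[:x.size], e[x.size:]
    g = (d / tau) * phi(x + tau * ex, y + tau * ey)
    return g * ex, -g * ey                  # descent direction in x, ascent in y

def proj_ball(v, r=1.0):
    """Euclidean projection onto the ball of radius r (keeps iterates bounded,
    mirroring the compact-domain setting)."""
    n = np.linalg.norm(v)
    return v if n <= r else v * (r / n)

# Toy strongly-convex-strongly-concave problem with saddle point at (0, 0).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
phi = lambda x, y: 0.5 * x @ x + x @ A @ y - 0.5 * y @ y

x = proj_ball(rng.standard_normal(5))
y = proj_ball(rng.standard_normal(5))
for t in range(1, 50001):
    gx, gy = one_point_grad(phi, x, y, tau=0.5, rng=rng)
    lr = 1.0 / t                            # Robbins-Monro step for the strongly monotone case
    x = proj_ball(x - lr * gx)
    y = proj_ball(y - lr * gy)

# The one-point estimator has high variance, so progress is noisy,
# but both norms should shrink toward the saddle point (0, 0).
print(np.linalg.norm(x), np.linalg.norm(y))
```

Roughly speaking, the kernel and residual-feedback modifications mentioned in the abstract change only this estimator (residual feedback, for instance, reuses the previous function value in a difference to reduce variance) while the outer descent-ascent loop stays the same.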


Full work available at URL: https://arxiv.org/abs/2103.00321








