One-point gradient-free methods for smooth and non-smooth saddle-point problems
Publication: Q2117626
DOI: 10.1007/978-3-030-77876-7_10
zbMATH Open: 1487.90612
arXiv: 2103.00321
OpenAlex: W3176874790
MaRDI QID: Q2117626
Aleksandr Beznosikov, Vasilii Novitskii, Alexander V. Gasnikov
Publication date: 22 March 2022
Abstract: In this paper, we analyze gradient-free methods with one-point feedback for stochastic saddle-point problems. For the non-smooth and smooth cases, we present an analysis in a general geometric setup with an arbitrary Bregman divergence. For problems with higher-order smoothness, the analysis is carried out only in the Euclidean case. The estimates we obtain match the best currently known bounds for gradient-free methods with one-point feedback for minimizing a convex or strongly convex function. The paper uses three main approaches to recovering the gradient through finite differences: the standard estimator with a random direction, as well as its modifications with kernels and with residual feedback. We also provide experiments comparing these approaches on a matrix game.
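The abstract's core technique can be illustrated concretely. Below is a minimal sketch, not the authors' algorithm: plain Euclidean zeroth-order stochastic gradient descent-ascent using the standard random-direction one-point estimator, applied to a hypothetical regularized bilinear saddle-point problem. The problem data (matrix `A`, noise level) and all step-size and smoothing parameters are assumptions chosen for illustration.

```python
# A minimal sketch (assumed setup, not the paper's exact method): zeroth-order
# stochastic gradient descent-ascent with the standard one-point estimator
#   ghat(z) = (d / tau) * f(z + tau * e, xi) * e,  e ~ Uniform(unit sphere),
# on the regularized bilinear problem
#   f(x, y) = x^T A y + (mu/2)||x||^2 - (mu/2)||y||^2,
# whose unique saddle point is (0, 0). Only noisy function values are observed.
import numpy as np

rng = np.random.default_rng(0)

def make_problem(n=5, mu=0.5):
    A = rng.standard_normal((n, n))
    def f_noisy(x, y, sigma=0.01):
        val = x @ A @ y + 0.5 * mu * (x @ x) - 0.5 * mu * (y @ y)
        return val + sigma * rng.standard_normal()  # one-point noisy oracle
    return f_noisy

def one_point_grad(f_noisy, x, y, tau):
    """Random-direction one-point estimate of grad f at (x, y)."""
    d = x.size + y.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)              # uniform direction on the unit sphere
    ex, ey = e[:x.size], e[x.size:]
    val = f_noisy(x + tau * ex, y + tau * ey)
    g = (d / tau) * val * e             # estimates the gradient of the
    return g[:x.size], g[x.size:]       # tau-smoothed function (biased, noisy)

def zo_descent_ascent(f_noisy, n=5, steps=20000, gamma=2e-3, tau=0.1):
    x, y = np.ones(n) / n, np.ones(n) / n
    for _ in range(steps):
        gx, gy = one_point_grad(f_noisy, x, y, tau)
        x -= gamma * gx                 # descend in x ...
        y += gamma * gy                 # ... ascend in y
    return x, y

f_noisy = make_problem()
x, y = zo_descent_ascent(f_noisy)
print("distance to saddle point (0, 0):",
      np.linalg.norm(np.concatenate([x, y])))
```

The kernel and residual-feedback modifications mentioned in the abstract keep this overall shape but change the estimator; residual feedback, for instance, subtracts the previously observed function value before scaling by d/tau, which reduces the estimator's variance without extra oracle queries per iteration.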
Full work available at URL: https://arxiv.org/abs/2103.00321
Recommendations
- Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem
- Noisy zeroth-order optimization for non-smooth saddle point problems
- Projection generalized two-point extragradient quasi-Newton method for saddle-point and other problems
- Convergence rate of \(\mathcal{O}(1/k)\) for optimistic gradient and extragradient methods in smooth convex-concave saddle point problems
- Generalized gradient method for the determination of (?+?, ?+?)-saddle points
MSC classification:
- Stochastic programming (90C15)
- Complementarity and equilibrium problems and variational inequalities (finite dimensions) (aspects of mathematical programming) (90C33)
Cites Work
- Lectures on modern convex optimization. Analysis, algorithms, and engineering applications
- Random gradient-free minimization of convex functions
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem
Cited in (2 documents)