Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem
Publication: 4965105
DOI: 10.1007/978-3-030-58657-7_11 · zbMath: 1460.90118 · arXiv: 2005.05913 · OpenAlex: W3086270874 · MaRDI QID: Q4965105
Abdurakhmon Sadiev, Aleksandr Beznosikov, Alexander V. Gasnikov
Publication date: 25 February 2021
Published in: Mathematical Optimization Theory and Operations Research
Full work available at URL: https://arxiv.org/abs/2005.05913
Mathematics Subject Classification:
- Stochastic programming (90C15)
- Complementarity and equilibrium problems and variational inequalities (finite dimensions) (aspects of mathematical programming) (90C33)
Related Items
- Improved exploitation of higher order smoothness in derivative-free optimization
- An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
- Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact
- Zeroth-order single-loop algorithms for nonconvex-linear minimax problems
- Accelerated gradient methods with absolute and relative noise in the gradient
- Recent theoretical advances in decentralized distributed convex optimization
- Noisy zeroth-order optimization for non-smooth saddle point problems
- One-point gradient-free methods for smooth and non-smooth saddle-point problems
Cites Work
- Gradient-free proximal methods with inexact oracle for convex stochastic nonsmooth optimization problems on the simplex
- Accelerated methods for saddle-point problem
- Accelerated gradient-free optimization methods with a non-Euclidean proximal operator
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
- Random gradient-free minimization of convex functions
- Lectures on Modern Convex Optimization
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback