Noisy zeroth-order optimization for non-smooth saddle point problems
Publication: 2104286
DOI: 10.1007/978-3-031-09607-5_2
OpenAlex: W4285199472
MaRDI QID: Q2104286
FDO: Q2104286
Darina Dvinskikh, Iaroslav Tominin, Vladislav Tominin, Alexander V. Gasnikov
Publication date: 7 December 2022
Full work available at URL: https://doi.org/10.1007/978-3-031-09607-5_2
Recommendations
- Zeroth-order methods for noisy Hölder-gradient functions
- Gradient-free two-point methods for solving stochastic nonsmooth convex optimization problems with small non-random noises
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem
- One-point gradient-free methods for smooth and non-smooth saddle-point problems
Mathematics Subject Classification
- Stochastic programming (90C15)
- Minimax problems in mathematical programming (90C47)
- Derivative-free methods and methods using generalized derivatives (90C56)
Cites Work
- Lectures on modern convex optimization. Analysis, algorithms, and engineering applications
- Title not available
- Introduction to Stochastic Search and Optimization
- Random gradient-free minimization of convex functions
- Online convex optimization in the bandit setting: gradient descent without a gradient
- Title not available
- Introduction to Derivative-Free Optimization
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems
- Universal method for stochastic composite optimization problems
- Gradient-free two-point methods for solving stochastic nonsmooth convex optimization problems with small non-random noises
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Safe global optimization of expensive noisy black-box functions in the δ-Lipschitz framework
- Lectures on Stochastic Programming: Modeling and Theory, Third Edition
- On the upper bound for the expectation of the norm of a vector uniformly distributed on the sphere and the phenomenon of concentration of uniform measure on the sphere
- Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem
Cited In (8)
- Accelerated zero-order SGD method for solving the black box optimization problem under "overparametrization" condition
- Stochastic adversarial noise in the "black box" optimization problem
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- A Zeroth-Order Proximal Stochastic Gradient Method for Weakly Convex Stochastic Optimization
- Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
- The "black-box" optimization problem: zero-order accelerated stochastic method via kernel approximation
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact