Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact
From MaRDI portal
Publication: 6060544
DOI: 10.1007/s10287-023-00470-2
arXiv: 2304.02442
MaRDI QID: Q6060544
A. V. Gasnikov, Darina Dvinskikh, Pavel Dvurechensky, Nikita Kornilov
Publication date: 3 November 2023
Published in: Computational Management Science
Full work available at URL: https://arxiv.org/abs/2304.02442
Keywords: stochastic optimization, heavy tails, derivative-free optimization, non-smooth problems, zeroth-order optimization, gradient clipping, stochastic mirror descent
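The keywords name the techniques the paper combines: a two-point zeroth-order gradient estimate for a non-smooth objective, gradient clipping to handle heavy-tailed evaluation noise, and a mirror-descent step. A minimal illustrative sketch of that combination (not the paper's actual algorithm; the toy ℓ1 objective, Student-t noise, step sizes, and clipping level are all assumptions) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x):
    # Toy non-smooth convex objective ||x - 1||_1 plus heavy-tailed
    # (Student-t, 3 d.o.f.) evaluation noise -- an illustrative stand-in.
    return np.abs(x - 1.0).sum() + 0.01 * rng.standard_t(df=3)

def two_point_grad(x, tau=1e-2):
    # Two-point zeroth-order gradient estimate along a random
    # direction e drawn uniformly from the unit sphere.
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)
    return x.size * (noisy_f(x + tau * e) - noisy_f(x - tau * e)) / (2 * tau) * e

def clip(g, lam=10.0):
    # Gradient clipping: rescale g whenever its norm exceeds lam,
    # which tames the heavy-tailed noise in the estimate.
    n = np.linalg.norm(g)
    return g if n <= lam else g * (lam / n)

# Euclidean mirror descent (projected subgradient step) on the box [0, 2]^d.
x = np.zeros(5)
for t in range(2000):
    step = 0.5 / np.sqrt(t + 1)
    x = np.clip(x - step * clip(two_point_grad(x)), 0.0, 2.0)
```

With the Euclidean prox function, the mirror step reduces to a projected subgradient step; on a simplex constraint (as in several of the cited works) one would instead use the entropy prox and a multiplicative-weights update.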
Cites Work
- Gradient-free proximal methods with inexact oracle for convex stochastic nonsmooth optimization problems on the simplex
- Gradient-free two-point methods for solving stochastic nonsmooth convex optimization problems with small non-random noises
- Universal method for stochastic composite optimization problems
- Noisy zeroth-order optimization for non-smooth saddle point problems
- On the upper bound for the expectation of the norm of a vector uniformly distributed on the sphere and the phenomenon of concentration of uniform measure on the sphere
- Algorithms of robust stochastic optimization based on mirror descent method
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
- Random gradient-free minimization of convex functions
- Lectures on Modern Convex Optimization
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Introduction to Derivative-Free Optimization
- Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem
- Lectures on Stochastic Programming: Modeling and Theory, Third Edition
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems