Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
From MaRDI portal
Publication: 2397263
DOI: 10.1134/S0005117917020035 · zbMath: 1362.93165 · MaRDI QID: Q2397263
A. A. Lagunovskaya, F. A. Fedorenko, E. A. Krymova, Alexander V. Gasnikov, I. N. Usmanova
Publication date: 22 May 2017
Published in: Automation and Remote Control
Stochastic programming (90C15) Discrete-time control/observation systems (93C55) Optimal stochastic control (93E20)
Related Items (19)
- Gradient-free two-point methods for solving stochastic nonsmooth convex optimization problems with small non-random noises
- Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference
- Improved exploitation of higher order smoothness in derivative-free optimization
- An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
- Primal-dual mirror descent method for constraint stochastic optimization problems
- Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems
- Gradient-free methods for non-smooth convex stochastic optimization with heavy-tailed noise on convex compact
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
- Unifying framework for accelerated randomized methods in convex optimization
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- An accelerated directional derivative method for smooth stochastic convex optimization
- Accelerated directional search with non-Euclidean prox-structure
- Gradient-Free Methods with Inexact Oracle for Convex-Concave Stochastic Saddle-Point Problem
- Analogues of Switching Subgradient Schemes for Relatively Lipschitz-Continuous Convex Programming Problems
- A new one-point residual-feedback oracle for black-box learning and control
- Derivative-free optimization methods
- Noisy zeroth-order optimization for non-smooth saddle point problems
- One-point gradient-free methods for smooth and non-smooth saddle-point problems
Cites Work
- Primal-dual subgradient methods for convex problems
- Gradient-free proximal methods with inexact oracle for convex stochastic nonsmooth optimization problems on the simplex
- Lectures on Modern Convex Optimization
- On Martingale Extensions of Vapnik–Chervonenkis Theory with Applications to Online Learning
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Online Learning and Online Convex Optimization
- Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
- Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems
- Prediction, Learning, and Games