Kernel-based methods for bandit convex optimization
DOI: 10.1145/3055399.3055403
zbMATH Open: 1370.90175
arXiv: 1607.03084
OpenAlex: W2963831922
MaRDI QID: Q4977962
FDO: Q4977962
Authors: Sébastien Bubeck, Yin Tat Lee, Ronen Eldan
Publication date: 17 August 2017
Published in: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing
Full work available at URL: https://arxiv.org/abs/1607.03084
Recommendations
- Kernel-based Methods for Bandit Convex Optimization
- Kernel estimation and model combination in a bandit problem with covariates
- Stochastic convex optimization with bandit feedback
- Bandit convex optimization in non-stationary environments
- Bandwidth selection in kernel empirical risk minimization via the gradient
- Optimizing Kernel Methods: A Unifying Variational Principle
- Optimal and robust kernel algorithms for passive stochastic approximation
- On convergence of kernel learning estimators
Mathematics Subject Classification: Learning and adaptive systems in artificial intelligence (68T05) · Convex programming (90C25) · Analysis of algorithms and problem complexity (68Q25) · Probabilistic games; gambling (91A60)
Cited In (15)
- Kernel-based Methods for Bandit Convex Optimization
- Optimal multi-unit mechanisms with private demands
- Improved exploitation of higher order smoothness in derivative-free optimization
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference
- Zeroth-order algorithms for nonconvex-strongly-concave minimax problems with improved complexities
- An accelerated method for derivative-free smooth stochastic convex optimization
- Improved regret for zeroth-order adversarial bandit convex optimisation
- Exploratory distributions for convex functions
- Technical note: Nonstationary stochastic optimization under \(L_{p,q}\)-variation measures
- Stochastic convex optimization with bandit feedback
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- Distributed zeroth-order optimization: convergence rates that match centralized counterpart
- Derivative-free optimization methods
- Adversarial bandits with knapsacks