Kernel-based methods for bandit convex optimization
From MaRDI portal
Publication:4977962
Abstract: We consider the adversarial convex bandit problem and we build the first \(\mathrm{poly}(T)\)-time algorithm with \(\mathrm{poly}(n)\sqrt{T}\)-regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves \(\tilde{O}(n^{9.5}\sqrt{T})\)-regret, and we show that a simple variant of this algorithm can be run in \(\mathrm{poly}(n\log T)\)-time per step at the cost of an additional \(\mathrm{poly}(n)T^{o(1)}\) factor in the regret. These results improve upon the \(\tilde{O}(n^{11}\sqrt{T})\)-regret and \(\exp(\mathrm{poly}(T))\)-time result of the first two authors, and the \(\log(T)^{\mathrm{poly}(n)}\sqrt{T}\)-regret and \(\log(T)^{\mathrm{poly}(n)}\)-time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve \(\tilde{O}(n^{1.5}\sqrt{T})\)-regret, and moreover that this regret is unimprovable (the current best lower bound being \(\Omega(n\sqrt{T})\), and it is achieved with linear functions). For the simpler situation of zeroth order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order \(n^3/\varepsilon^2\).
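To make the bandit setting of the abstract concrete, here is a minimal, schematic sketch of exponential weights driven by one-point (bandit) feedback on a discretized one-dimensional domain. This is a generic importance-weighted sketch, not the paper's algorithm: the paper replaces the pointwise estimator below with a kernel-smoothed one and anneals the learning rate upward, both of which this toy omits. The grid size, loss function, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized domain [0, 1]: a toy stand-in for the convex body.
K, T = 64, 5000
xs = np.linspace(0.0, 1.0, K)

def loss(x):
    # Fixed smooth convex loss with values in [0, 1]; illustrative only.
    return (x - 0.3) ** 2

log_w = np.zeros(K)   # log-weights of the exponential-weights distribution
gamma = 0.1           # uniform exploration mixture
cum_loss = 0.0
for t in range(T):
    p = np.exp(log_w - log_w.max())
    p /= p.sum()
    p = (1 - gamma) * p + gamma / K          # mix in exploration
    i = rng.choice(K, p=p)                   # play one point ...
    ell = loss(xs[i])                        # ... observe one scalar loss
    cum_loss += ell
    # Standard decreasing rate; the paper's schedule instead *increases*.
    eta = np.sqrt(np.log(K) / (K * (t + 1)))
    est = np.zeros(K)
    est[i] = ell / p[i]                      # unbiased one-point estimate
    log_w -= eta * est

regret = cum_loss - T * loss(xs).min()       # regret against best grid point
print(regret)
```

Since each round reveals only the scalar `loss(xs[i])`, the importance-weighted vector `est` is the algorithm's entire view of the loss; the kernel idea in the paper can be read as replacing this single-spike estimate with a smoothed version that shares information across nearby points.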
Recommendations
- Kernel-based Methods for Bandit Convex Optimization
- Kernel estimation and model combination in a bandit problem with covariates
- Stochastic convex optimization with bandit feedback
- Bandit convex optimization in non-stationary environments
- Bandwidth selection in kernel empirical risk minimization via the gradient
- Optimizing Kernel Methods: A Unifying Variational Principle
- Optimal and robust kernel algorithms for passive stochastic approximation
- On convergence of kernel learning estimators
Cited in (15)
- Kernel-based Methods for Bandit Convex Optimization
- Optimal multi-unit mechanisms with private demands
- Improved exploitation of higher order smoothness in derivative-free optimization
- Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- Zeroth-order algorithms for nonconvex-strongly-concave minimax problems with improved complexities
- An accelerated method for derivative-free smooth stochastic convex optimization
- Improved regret for zeroth-order adversarial bandit convex optimisation
- Exploratory distributions for convex functions
- Technical note: Nonstationary stochastic optimization under \(L_{p,q}\)-variation measures
- Stochastic convex optimization with bandit feedback
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- Distributed zeroth-order optimization: convergence rates that match centralized counterpart
- Derivative-free optimization methods
- Adversarial bandits with knapsacks