Kernel-based methods for bandit convex optimization
From MaRDI portal
Publication:4977962
Abstract: We consider the adversarial convex bandit problem and we build the first \(\mathrm{poly}(T)\)-time algorithm with \(\mathrm{poly}(n)\sqrt{T}\)-regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves \(\widetilde{O}(n^{9.5}\sqrt{T})\)-regret, and we show that a simple variant of this algorithm can be run in \(\mathrm{poly}(n\log(T))\)-time per step at the cost of an additional \(\mathrm{poly}(n)\,T^{o(1)}\) factor in the regret. These results improve upon the \(\widetilde{O}(n^{11}\sqrt{T})\)-regret and \(\exp(\mathrm{poly}(T))\)-time result of the first two authors, and the \(\log(T)^{\mathrm{poly}(n)}\sqrt{T}\)-regret and \(\log(T)^{\mathrm{poly}(n)}\)-time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve \(\widetilde{O}(n^{1.5}\sqrt{T})\)-regret, and moreover that this regret is unimprovable (the current best lower bound being \(\Omega(n\sqrt{T})\), and it is achieved with linear functions). For the simpler situation of zeroth-order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order \(n^{3}/\epsilon^{2}\).
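To give a concrete (purely illustrative) reference point for the exponential-weights ingredient mentioned in the abstract, the sketch below runs Exp3-style exponential weights over a discretized one-dimensional action set with bandit feedback and importance-weighted loss estimates. This is not the paper's kernel-based algorithm; the loss function, grid size, and the increasing learning-rate schedule are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 50                              # grid size (assumption for the sketch)
grid = np.linspace(0.0, 1.0, K)     # discretized action set in [0, 1]
T = 2000                            # horizon
log_w = np.zeros(K)                 # log-weights, kept in log space for stability


def loss(x, t):
    """An illustrative convex loss; an adversary could vary this with t."""
    return (x - 0.3) ** 2


total_loss = 0.0
for t in range(1, T + 1):
    # Increasing learning-rate schedule (illustrative stand-in for the
    # annealing idea named in the abstract; not the paper's schedule).
    eta = np.sqrt(np.log(K) * t) / T

    p = np.exp(log_w - log_w.max())
    p /= p.sum()
    i = rng.choice(K, p=p)          # play one point; observe only its loss
    ell = loss(grid[i], t)
    total_loss += ell

    est = np.zeros(K)               # unbiased importance-weighted estimate
    est[i] = ell / p[i]
    log_w -= eta * est              # exponential-weights update on estimates

best = float(np.min((grid - 0.3) ** 2))   # best fixed point on the grid
avg_regret = total_loss / T - best
```

The importance weighting (`ell / p[i]`) makes the loss estimate unbiased despite observing only the played point, which is what lets a full-information method like exponential weights operate under bandit feedback; the paper's contribution is, roughly, replacing the discretization with kernel smoothing so the dimension dependence stays polynomial.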
Recommendations
- Kernel-based Methods for Bandit Convex Optimization
- Kernel estimation and model combination in a bandit problem with covariates
- Stochastic convex optimization with bandit feedback
- Bandit convex optimization in non-stationary environments
- Bandwidth selection in kernel empirical risk minimization via the gradient
- Optimizing Kernel Methods: A Unifying Variational Principle
- Optimal and robust kernel algorithms for passive stochastic approximation
- On convergence of kernel learning estimators
Cited in (15)
- Improved exploitation of higher order smoothness in derivative-free optimization
- Improved regret for zeroth-order adversarial bandit convex optimisation
- Derivative-free optimization methods
- Technical note: Nonstationary stochastic optimization under \(L_{p,q}\)-variation measures
- Adversarial bandits with knapsacks
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference
- Optimal multi-unit mechanisms with private demands
- Zeroth-order algorithms for nonconvex-strongly-concave minimax problems with improved complexities
- Distributed zeroth-order optimization: convergence rates that match centralized counterpart
- Stochastic convex optimization with bandit feedback
- Kernel-based Methods for Bandit Convex Optimization
- Exploratory distributions for convex functions
- Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
- An accelerated method for derivative-free smooth stochastic convex optimization