A universal black-box optimization method with almost dimension-free convergence rate guarantees
From MaRDI portal
Publication: Q6402499
arXiv: 2206.09352 · MaRDI QID: Q6402499 · FDO: Q6402499
Authors: Kimon Antonakopoulos, Dong Quan Vu, Volkan Cevher, Kfir Y. Levy, Panayotis Mertikopoulos
Publication date: 19 June 2022
Abstract: Universal methods for optimization are designed to achieve theoretically optimal convergence rates without any prior knowledge of the problem's regularity parameters or the accuracy of the gradient oracle employed by the optimizer. In this regard, existing state-of-the-art algorithms achieve an O(1/T^2) value convergence rate in Lipschitz smooth problems with a perfect gradient oracle, and an O(1/sqrt(T)) convergence rate when the underlying problem is non-smooth and/or the gradient oracle is stochastic. On the downside, these methods do not take into account the problem's dimensionality, and this can have a catastrophic impact on the achieved convergence rate, in both theory and practice. Our paper aims to bridge this gap by providing a scalable universal gradient method - dubbed UnderGrad - whose oracle complexity is almost dimension-free in problems with a favorable geometry (like the simplex, linearly constrained semidefinite programs and combinatorial bandits), while retaining the order-optimal dependence on T described above. These "best-of-both-worlds" results are achieved via a primal-dual update scheme inspired by the dual exploration method for variational inequalities.
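To make the "favorable geometry" point concrete, the sketch below shows a standard adaptive entropic mirror-descent scheme on the probability simplex. This is not the paper's UnderGrad algorithm (which uses a primal-dual dual-exploration update); it is a simpler, well-known universal-style method whose step size adapts to observed gradients without knowing the smoothness constant, and whose rate constants scale with log(d) rather than d thanks to the entropic regularizer. All function and variable names here are illustrative.

```python
import numpy as np

def adaptive_entropic_md(grad, x0, T):
    """Illustrative sketch: entropic mirror descent on the simplex with an
    AdaGrad-style adaptive step size. Not the paper's UnderGrad method --
    just a related universal scheme: no Lipschitz constant or noise level
    is supplied, and the entropic geometry keeps constants near log(d).
    """
    x = x0.copy()
    avg = np.zeros_like(x)
    sum_sq = 0.0
    for _ in range(T):
        g = grad(x)
        sum_sq += float(np.dot(g, g))      # accumulate squared gradient norms
        eta = 1.0 / np.sqrt(1.0 + sum_sq)  # adaptive step size, no prior knowledge
        w = x * np.exp(-eta * g)           # entropic mirror step
        x = w / w.sum()                    # (multiplicative-weights update)
        avg += x
    return avg / T                         # ergodic average iterate
```

For a convex objective such as f(x) = ||x - c||^2 with c in the simplex, the average iterate converges to c without any tuning, which is the behavior "universal" methods are meant to deliver.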
Learning and adaptive systems in artificial intelligence (68T05) · Convex programming (90C25) · Computational learning theory (68Q32) · Stochastic programming (90C15)
This page was built for publication: A universal black-box optimization method with almost dimension-free convergence rate guarantees