Exploiting higher-order derivatives in convex optimization methods
From MaRDI portal
arXiv: 2208.13190 · MaRDI QID: Q6506474 · FDO: Q6506474
Authors: Dmitry Kamzolov, Alexander Gasnikov, Pavel Dvurechensky, Artem Agafonov, Martin Takáč
Abstract: Exploiting higher-order derivatives in convex optimization has been known at least since the 1970s. In each iteration, higher-order (also called tensor) methods minimize a regularized Taylor expansion of the objective function, which leads to faster convergence rates if the corresponding higher-order derivative is Lipschitz-continuous. Recently, a series of lower iteration-complexity bounds for such methods was proved, and a gap between the upper and lower complexity bounds was revealed. Moreover, it was shown that such methods are implementable, since the appropriately regularized Taylor expansion of a convex function is itself convex and can therefore be minimized in polynomial time. Only very recently was an algorithm with the optimal convergence rate $O(1/k^{(3p+1)/2})$ proposed for minimizing convex functions with Lipschitz $p$-th derivative. For convex functions with Lipschitz third derivative, these developments made it possible to propose a second-order method with convergence rate $O(1/k^5)$, which is faster than the $O(1/k^{3.5})$ rate of existing second-order methods.
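To make the "regularized Taylor expansion" step concrete: for $p = 2$ it reduces to the cubic-regularized Newton model of Nesterov and Polyak, $m(s) = \langle \nabla f(x), s\rangle + \tfrac12 \langle \nabla^2 f(x)\, s, s\rangle + \tfrac{M}{6}\|s\|^3$. Below is a minimal NumPy sketch of this basic (unaccelerated) second-order method; the bisection subproblem solver, the function names, and the toy objective are illustrative assumptions rather than code from the paper, and the accelerated variants achieving the rates quoted in the abstract are more involved.

```python
import numpy as np

def cubic_newton_step(grad, hess, M, tol=1e-10):
    """Minimize the cubically regularized second-order Taylor model
        m(s) = grad^T s + 0.5 * s^T hess s + (M / 6) * ||s||^3.
    For convex f (hess positive semidefinite) the minimizer satisfies
        (hess + (M * r / 2) * I) s = -grad,  with  r = ||s||,
    so we bisect on the scalar r."""
    eye = np.eye(grad.shape[0])

    def step_for(r):
        return np.linalg.solve(hess + 0.5 * M * r * eye, -grad)

    # ||step_for(r)|| is decreasing in r; grow hi until ||step_for(hi)|| <= hi.
    lo, hi = 0.0, 1.0
    while np.linalg.norm(step_for(hi)) > hi:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(step_for(mid)) > mid:
            lo = mid
        else:
            hi = mid
    return step_for(hi)

def cubic_newton(grad_f, hess_f, x0, M, n_iters=50):
    """Basic (unaccelerated) cubic-regularized Newton method; for convex f
    with M-Lipschitz Hessian it converges at the rate O(1/k^2)."""
    x = x0.astype(float).copy()
    for _ in range(n_iters):
        x = x + cubic_newton_step(grad_f(x), hess_f(x), M)
    return x

# Toy usage (hypothetical example): minimize the convex function
# f(x) = ||x||^4 / 4 + 0.5 * x^T A x; its Hessian is Lipschitz on the
# bounded region visited here, with M = 20 as an assumed local constant.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
grad_f = lambda x: np.dot(x, x) * x + A @ x
hess_f = lambda x: np.dot(x, x) * np.eye(2) + 2.0 * np.outer(x, x) + A
x_star = cubic_newton(grad_f, hess_f, np.array([3.0, -2.0]), M=20.0)
print(x_star)  # approaches the minimizer at the origin
```

The bisection here is the standard one-dimensional reduction of the cubic subproblem; it also illustrates the implementability point from the abstract, since for convex $f$ the regularized model is convex and each step costs only a few linear solves.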