Near-optimal tensor methods for minimizing the gradient norm of convex functions and accelerated primal-dual tensor methods

arXiv: 1912.03381 · MaRDI QID: Q6330635


Authors: Pavel Dvurechensky, Petr Ostroukhov, A. V. Gasnikov, César A. Uribe, Anastasiya Ivanova


Publication date: 6 December 2019

Abstract: Motivated by convex problems with linear constraints and, in particular, by entropy-regularized optimal transport, we consider the problem of finding $\varepsilon$-approximate stationary points of convex functions with Lipschitz $p$-th order derivatives, i.e., points where the norm of the objective's gradient is less than or equal to $\varepsilon$. Lower complexity bounds for this problem were recently proposed in [Grapiglia and Nesterov, arXiv:1907.07053]. However, the methods presented in the same paper do not match these lower bounds. We close this gap by proposing two optimal (up to logarithmic factors) methods with complexity bounds $\tilde{O}(\varepsilon^{-2(p+1)/(3p+1)})$ and $\tilde{O}(\varepsilon^{-2/(3p+1)})$ with respect to the initial objective residual and the distance between the starting point and the solution, respectively. We also propose an accelerated primal-dual tensor method for convex problems with linear equality constraints, where the dual objective has Lipschitz $p$-th order derivatives. For this algorithm, we prove $\tilde{O}(\varepsilon^{-1/(p+1)})$ complexity in terms of the primal objective residual and the residual in the constraints. In experiments, we illustrate the practical performance of the proposed algorithms.
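
For concreteness, the setting and guarantees described in the abstract can be restated formally as follows. This is a sketch reconstructed from the abstract alone: the symbols $x_0$, $x^*$, $A$, $b$, and the Lipschitz constant $L_p$ are introduced here following common convention and are not taken from the paper's text; see the paper for the precise norms and definitions.

\[
\min_{x \in \mathbb{R}^n} f(x), \qquad f \ \text{convex}, \qquad
\|\nabla^p f(x) - \nabla^p f(y)\| \le L_p \|x - y\| \quad \forall\, x, y,
\]
\[
\text{goal: find } \hat{x} \ \text{with} \ \|\nabla f(\hat{x})\| \le \varepsilon .
\]

Under this reading, the two near-optimal methods reach such a point after $\tilde{O}\bigl(\varepsilon^{-2(p+1)/(3p+1)}\bigr)$ iterations when complexity is stated through the initial residual $f(x_0) - f^*$, and after $\tilde{O}\bigl(\varepsilon^{-2/(3p+1)}\bigr)$ iterations when stated through the distance $\|x_0 - x^*\|$. For the primal-dual method applied to $\min_{Ax = b} f(x)$, where the dual objective has Lipschitz $p$-th order derivatives, the $\tilde{O}\bigl(\varepsilon^{-1/(p+1)}\bigr)$ bound controls both the primal residual $f(\hat{x}) - f^*$ and the constraint residual $\|A\hat{x} - b\|$.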