Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy

From MaRDI portal
Publication:4629334

DOI: 10.1137/16M1106316 · zbMATH Open: 1436.90136 · arXiv: 1811.07057 · Wikidata: Q128268315 · Scholia: Q128268315 · MaRDI QID: Q4629334 · FDO: Q4629334


Authors: Coralia Cartis, Nick I. M. Gould, Philippe L. Toint


Publication date: 22 March 2019

Published in: SIAM Journal on Optimization

Abstract: Adaptive cubic regularization methods have emerged as a credible alternative to linesearch and trust-region methods for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a general new class of adaptive regularization methods, which use first- or higher-order local Taylor models of the objective regularized by any power of the step size, applied to convexly constrained optimization problems. We investigate the worst-case evaluation complexity (global rate of convergence) of these algorithms when the level of sufficient smoothness of the objective may be unknown or may even be absent. We find that the methods accurately reflect in their complexity the degree of smoothness of the objective and satisfy increasingly better bounds with improving accuracy of the models. The bounds vary continuously and robustly with respect to the regularization power, the accuracy of the model, and the degree of smoothness of the objective.
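The abstract describes methods that minimize a local Taylor model of the objective regularized by a power r of the step size, adapting the regularization weight according to how well the model predicted the actual decrease. A minimal numpy sketch of one such iteration, assuming a second-order Taylor model and a crude fixed-step gradient-descent inner solver (the parameter names, step sizes, and update rules here are illustrative assumptions, not the authors' algorithm):

```python
import numpy as np

def adaptive_regularization(f, grad, hess, x0, r=3.0, sigma0=1.0,
                            eta=0.1, gamma=2.0, tol=1e-6, max_iter=200):
    """Sketch of an adaptive regularization method: a second-order
    Taylor model plus (sigma/r)*||s||^r; r = 3 recovers the classical
    cubic-regularization variant. All tuning constants are illustrative."""
    x, sigma = np.asarray(x0, dtype=float), sigma0
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) <= tol:
            break
        # Approximately minimize the regularized model
        #   m(s) = f(x) + g.s + 0.5 s.H.s + (sigma/r)*||s||^r
        # by plain gradient descent on m (a deliberately crude inner solver).
        s = np.zeros_like(x)
        for _ in range(100):
            ns = np.linalg.norm(s)
            gm = (g + H @ s + sigma * ns**(r - 2) * s) if ns > 0 else g
            s = s - 0.01 * gm  # hypothetical fixed inner step size
        # Predicted decrease from the model vs. actual decrease in f.
        pred = -(g @ s + 0.5 * s @ H @ s + (sigma / r) * np.linalg.norm(s)**r)
        rho = (f(x) - f(x + s)) / pred if pred > 0 else -1.0
        if rho >= eta:                    # successful step: accept it,
            x = x + s                     # relax the regularization
            sigma = max(sigma / gamma, 1e-8)
        else:                             # unsuccessful: keep x,
            sigma *= gamma                # regularize more strongly
    return x
```

On a convex quadratic such as f(x) = ||x||^2 the successful steps approach Newton steps as sigma shrinks, so the loop converges in a handful of outer iterations; the paper's analysis concerns how the worst-case iteration count varies with r, the model order, and the smoothness of f.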


Full work available at URL: https://arxiv.org/abs/1811.07057










Cited In (35)






