Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
Publication: 4629334
DOI: 10.1137/16M1106316
zbMATH Open: 1436.90136
arXiv: 1811.07057
Wikidata: Q128268315
Scholia: Q128268315
MaRDI QID: Q4629334
FDO: Q4629334
Authors: Coralia Cartis, Nick I. M. Gould, Philippe L. Toint
Publication date: 22 March 2019
Published in: SIAM Journal on Optimization
Abstract: Adaptive cubic regularization methods have emerged as a credible alternative to linesearch and trust-region methods for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a general new class of adaptive regularization methods that use first- or higher-order local Taylor models of the objective, regularized by any power of the step size, and applied to convexly constrained optimization problems. We investigate the worst-case evaluation complexity (global rate of convergence) of these algorithms when the level of sufficient smoothness of the objective may be unknown or may even be absent. We find that the methods accurately reflect in their complexity the degree of smoothness of the objective and satisfy increasingly better bounds as the accuracy of the models improves. The bounds vary continuously and robustly with respect to the regularization power, the accuracy of the model, and the degree of smoothness of the objective.
Full work available at URL: https://arxiv.org/abs/1811.07057
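To illustrate the algorithm class described in the abstract, here is a minimal sketch of an adaptive regularization step: a first-order (p = 1) Taylor model regularized by a power r of the step size, with the regularization weight sigma adapted according to how well the model predicts the actual decrease. All function names, parameter values, and the update rule constants are illustrative assumptions, not the authors' implementation; the paper treats arbitrary model order p and convexly constrained problems.

```python
# Hypothetical sketch: adaptive regularization with a first-order model
#     m_k(s) = f(x_k) + g_k^T s + (sigma_k / r) * ||s||^r ,   r > 1.
# The model minimizer has the closed form
#     s = -(||g|| / sigma)^(1/(r-1)) * g / ||g|| .
import numpy as np

def ar1_step(f, grad, x, sigma=1.0, r=2.0, eta=0.1,
             gamma_inc=2.0, gamma_dec=0.5, tol=1e-6, max_iter=500):
    """Adaptive first-order regularization with step-size power r (sketch)."""
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:
            break
        # Exact minimizer of the regularized linear model.
        s = -(gnorm / sigma) ** (1.0 / (r - 1.0)) * g / gnorm
        # Decrease predicted by the model (always positive here).
        pred = -(g @ s) - (sigma / r) * np.linalg.norm(s) ** r
        # Ratio of actual to predicted decrease.
        rho = (f(x) - f(x + s)) / max(pred, 1e-16)
        if rho >= eta:                        # successful: accept, relax sigma
            x = x + s
            sigma = max(gamma_dec * sigma, 1e-8)
        else:                                 # unsuccessful: regularize harder
            sigma = gamma_inc * sigma
    return x

# Usage example on a smooth nonconvex test function (illustrative only).
f = lambda x: (x[0] - 1.0) ** 2 + 5.0 * np.sin(x[1]) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0),
                           10.0 * np.sin(x[1]) * np.cos(x[1])])
print(ar1_step(f, grad, np.array([3.0, 2.0])))
```

The sigma update plays the role of the trust-region radius: unsuccessful steps increase the regularization (shortening the next step), while successful steps relax it; choosing r = 2 recovers a gradient step of length ||g||/sigma, and larger powers shorten steps more aggressively when the gradient is large.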
Recommendations
- Uniformly a posteriori error estimates for regularizing algorithms
- On regularization procedures with linear accuracy estimates of approximations
- Estimation of accuracy of finite-dimensional methods of regularization
- Universality of regularized regression estimators in high dimensions
- On elimination of accuracy saturation of regularizing algorithms
- Smoothly varying regularization
- Conjugate gradient regularization under general smoothness and noise assumptions
Cites Work
- Introductory lectures on convex optimization. A basic course.
- Trust Region Methods
- On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems
- Regularity results for nonlinear elliptic systems and applications
- Universal gradient methods for convex optimization problems
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Cubic regularization of Newton method and its global performance
- Affine conjugate adaptive Newton methods for nonlinear elastomechanics
- Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Tensor Methods for Unconstrained Optimization Using Second Derivatives
- Accelerated methods for nonconvex optimization
- The use of quadratic regularization with a cubic descent condition for unconstrained optimization
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
- Regularized Newton methods for minimizing functions with Hölder continuous Hessians
- Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities
- On the use of third-order models with fourth-order regularization for unconstrained optimization
Cited In (35)
- An optimal high-order tensor method for convex optimization
- Inexact basic tensor methods for some classes of convex optimization problems
- On inexact solution of auxiliary problems in tensor methods for convex optimization
- Inexact tensor methods and their application to stochastic convex optimization
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints
- An adaptive regularization method in Banach spaces
- Adaptive Third-Order Methods for Composite Convex Optimization
- On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
- Complexity of a projected Newton-CG method for optimization with bounds
- Higher-order Newton methods with polynomial work per iteration
- Accelerated regularized Newton methods for minimizing composite convex functions
- A filter sequential adaptive cubic regularization algorithm for nonlinear constrained optimization
- Implementable tensor methods in unconstrained convex optimization
- Tensor methods for minimizing convex functions with Hölder continuous higher-order derivatives
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
- A control-theoretic perspective on optimal high-order optimization
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
- On large-scale unconstrained optimization and arbitrary regularization
- Hessian barrier algorithms for non-convex conic optimization
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- Recent Theoretical Advances in Non-Convex Optimization
- Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities
- Perseus: a simple and optimal high-order method for variational inequalities
- Unified acceleration of high-order algorithms under general Hölder continuity
- Tensor methods for finding approximate stationary points of convex functions
- Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization
- Regularized Newton Method with Global \(\mathcal{O}(1/k^2)\) Convergence
- On the complexity of an inexact restoration method for constrained optimization
- A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization
- The impact of noise on evaluation complexity: the deterministic trust-region case
- On the complexity of solving feasibility problems with regularized models
- On high-order model regularization for multiobjective optimization