Super-Universal Regularized Newton Method
Publication:6136654
Abstract: We analyze the performance of a variant of Newton's method with quadratic regularization for solving composite convex minimization problems. At each step of our method, we choose the regularization parameter proportional to a certain power of the gradient norm at the current point. We introduce a family of problem classes characterized by Hölder continuity of either the second or the third derivative. We then present the method with a simple adaptive search procedure that automatically adjusts to the problem class with the best global complexity bounds, without knowing the specific parameters of the problem. In particular, for the class of functions with Lipschitz continuous third derivative, we obtain the global rate previously attributed to third-order tensor methods. When the objective function is uniformly convex, we justify an automatic acceleration of our scheme, resulting in a faster global rate and local superlinear convergence. The switching between the different rates (sublinear, linear, and superlinear) is automatic; again, no a priori knowledge of parameters is needed.
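The core step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the function names, the acceptance test (plain decrease of the objective), and the doubling/halving update of the regularization constant `H` are assumptions for the sketch; the paper's adaptive search uses its own, more refined conditions.

```python
import numpy as np

def regularized_newton_step(grad, hess, x, H, alpha=0.5):
    """One gradient-regularized Newton step: the quadratic regularizer
    is proportional to a power of the current gradient norm."""
    g = grad(x)
    lam = H * np.linalg.norm(g) ** alpha        # regularization ~ ||grad f(x)||^alpha
    A = hess(x) + lam * np.eye(len(x))          # regularized Hessian
    return x - np.linalg.solve(A, g)

def adaptive_regularized_newton(f, grad, hess, x0, alpha=0.5, H0=1.0,
                                tol=1e-8, max_iter=100):
    """Sketch of an adaptive search: double H until the step is accepted
    (here: the objective decreases), then relax H for the next iteration."""
    x, H = np.asarray(x0, dtype=float), H0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        for _ in range(50):                     # cap the inner doublings
            x_new = regularized_newton_step(grad, hess, x, H, alpha)
            if f(x_new) < f(x):                 # hypothetical acceptance test
                break
            H *= 2.0
        x, H = x_new, max(H / 2.0, 1e-12)
    return x
```

For example, minimizing the convex function f(x) = ¼‖x‖₄⁴ + ½‖x − 1‖², whose gradient is x³ + x − 1 componentwise, drives the gradient norm below the tolerance in a handful of iterations from the origin.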
Recommendations
- Minimizing uniformly convex functions by cubic regularization of Newton method
- Regularized Newton Method with Global \(\mathcal{O}(1/k^2)\) Convergence
- Gradient regularization of Newton method with Bregman distances
- Accelerated regularized Newton methods for minimizing composite convex functions
- Accelerating the cubic regularization of Newton's method on convex problems
Cites work
- scientific article; zbMATH DE number 729680
- scientific article; zbMATH DE number 3052543
- A descent lemma beyond Lipschitz gradient continuity: first-order methods revisited and applications
- A method for the solution of certain non-linear problems in least squares
- A regularized Newton method without line search for unconstrained optimization
- Accelerated regularized Newton methods for minimizing composite convex functions
- Accelerating the cubic regularization of Newton's method on convex problems
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Affine-invariant contracting-point methods for convex optimization
- An Algorithm for Least-Squares Estimation of Nonlinear Parameters
- An accelerated hybrid proximal extragradient method for convex optimization and its implications to second-order methods
- Contracting proximal methods for smooth convex optimization
- Cubic regularization of Newton method and its global performance
- Implementable tensor methods in unconstrained convex optimization
- Inexact accelerated high-order proximal-point methods
- Iterative Solution of Nonlinear Equations in Several Variables
- Lectures on convex optimization
- Local convergence of tensor methods
- Maximization by Quadratic Hill-Climbing
- Minimizing uniformly convex functions by cubic regularization of Newton method
- Near-optimal hyperfast second-order method for convex optimization
- Newton's method and its use in optimization
- On inexact solution of auxiliary problems in tensor methods for convex optimization
- Oracle complexity of second-order methods for smooth convex optimization
- Regularized Newton method for unconstrained convex optimization
- Regularized Newton methods for minimizing functions with Hölder continuous Hessians
- Relatively smooth convex optimization by first-order methods, and applications
- Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints
- Superfast second-order methods for unconstrained convex optimization
- Tensor methods for minimizing convex functions with Hölder continuous higher-order derivatives
- Trust Region Methods
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models