Gradient regularization of Newton method with Bregman distances
From MaRDI portal
Publication:6201850
Abstract: In this paper, we propose the first second-order scheme based on arbitrary non-Euclidean norms, incorporated via Bregman distances. They are introduced directly into the Newton iterate with a regularization parameter proportional to the square root of the norm of the current gradient. For the basic scheme, as applied to the composite optimization problem, we establish a global convergence rate of the order \(\mathcal{O}(1/k^2)\), both in terms of the functional residual and in the norm of subgradients. Our main assumption on the smooth part of the objective is Lipschitz continuity of its Hessian. For uniformly convex functions of degree three, we justify a global linear rate, and for strongly convex functions we prove a local superlinear rate of convergence. Our approach can be seen as a relaxation of the cubic regularization of the Newton method, which preserves its convergence properties while the auxiliary subproblem at each iteration is simpler. We equip our method with an adaptive line-search procedure for choosing the regularization parameter. We also propose an accelerated scheme with convergence rate \(\mathcal{O}(1/k^3)\), where \(k\) is the iteration counter.
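The core update described in the abstract can be sketched in the Euclidean special case (where the Bregman distance reduces to the squared norm): a Newton step damped by a regularizer proportional to the square root of the current gradient norm. The test objective and the constant `c` below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def grad_reg_newton_step(g, H, x, c=1.0):
    # Gradient-regularized Newton step (Euclidean special case):
    # x+ = x - (H + lam*I)^{-1} g, with lam = c * sqrt(||g||).
    lam = c * np.sqrt(np.linalg.norm(g))
    return x - np.linalg.solve(H + lam * np.eye(x.size), g)

# Illustrative strongly convex objective (an assumption for this sketch):
# f(x) = sum(x_i^4)/4 + ||x||^2/2, whose Hessian is Lipschitz on bounded
# sets and whose unique minimizer is x* = 0.
grad = lambda x: x**3 + x
hess = lambda x: np.diag(3.0 * x**2 + 1.0)

x = np.array([2.0, -1.0])
for _ in range(50):
    x = grad_reg_newton_step(grad(x), hess(x), x)

print(np.linalg.norm(x))  # gradient-norm residual shrinks superlinearly near x*
```

Compared with cubic regularization, each iteration here solves only a single shifted linear system, which is the simplification of the auxiliary subproblem mentioned in the abstract.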
Recommendations
- Regularized Newton Method with Global \(\mathcal{O}(1/k^2)\) Convergence
- Super-Universal Regularized Newton Method
- Accelerating the cubic regularization of Newton's method on convex problems
- Minimizing uniformly convex functions by cubic regularization of Newton method
- On the convergence of a modified regularized Newton method for convex optimization with singular solutions
Cites work
- scientific article; zbMATH DE number 2146948
- scientific article; zbMATH DE number 852532
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Contracting proximal methods for smooth convex optimization
- Cubic regularization of Newton method and its global performance
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Lectures on convex optimization
- Regularized Newton method for unconstrained convex optimization
- Smooth minimization of non-smooth functions
Cited in (2)
This page was built for publication: Gradient regularization of Newton method with Bregman distances