Gradient regularization of Newton method with Bregman distances
DOI: 10.1007/s10107-023-01943-7 · arXiv: 2112.02952 · OpenAlex: W4360829127 · MaRDI QID: Q6201850
Publication date: 21 February 2024
Published in: Mathematical Programming. Series A. Series B
Abstract: In this paper, we propose the first second-order scheme based on arbitrary non-Euclidean norms, incorporated by Bregman distances, which are introduced directly into the Newton iterate with a regularization parameter proportional to the square root of the norm of the current gradient. For the basic scheme, as applied to the composite optimization problem, we establish a global convergence rate of the order $O(k^{-2})$, both in terms of the functional residual and in the norm of subgradients. Our main assumption on the smooth part of the objective is Lipschitz continuity of its Hessian. For uniformly convex functions of degree three, we justify a global linear rate, and for strongly convex functions we prove a local superlinear rate of convergence. Our approach can be seen as a relaxation of the Cubic Regularization of the Newton method, which preserves its convergence properties while the auxiliary subproblem at each iteration is simpler. We equip our method with an adaptive line-search procedure for choosing the regularization parameter. We also propose an accelerated scheme with convergence rate $O(k^{-3})$, where $k$ is the iteration counter.
Full work available at URL: https://arxiv.org/abs/2112.02952
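As an illustration of the regularization rule described in the abstract, the following is a minimal sketch of one gradient-regularized Newton step in the Euclidean case; the paper's general scheme works with Bregman distances for arbitrary norms and with composite objectives. The constant `c`, the quadratic test function, and the function names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: Newton step regularized by the square root of the gradient norm
# (Euclidean case only; the paper uses Bregman distances for general norms).
import numpy as np

def grad_reg_newton_step(grad, hess, x, c=1.0):
    """One step x+ = x - (H + lam*I)^{-1} g with lam = c * sqrt(||g||)."""
    g = grad(x)
    H = hess(x)
    lam = c * np.sqrt(np.linalg.norm(g))  # regularization ~ sqrt of current gradient norm
    return x - np.linalg.solve(H + lam * np.eye(len(x)), g)

# Illustrative use on a simple smooth convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: A @ x - b
hess = lambda x: A

x = np.zeros(2)
for _ in range(20):
    x = grad_reg_newton_step(grad, hess, x)
print(x, np.linalg.norm(grad(x)))  # iterate and residual gradient norm
```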
Cites Work
- Smooth minimization of non-smooth functions
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Cubic regularization of Newton method and its global performance
- Regularized Newton method for unconstrained convex optimization
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Lectures on convex optimization
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Contracting Proximal Methods for Smooth Convex Optimization