Gradient descent in a generalised Bregman distance framework
From MaRDI portal
Publication: Q6280587
arXiv: 1612.02506 · MaRDI QID: Q6280587 · FDO: Q6280587
Authors: Martin Benning, M. M. Betcke, Matthias J. Ehrhardt, Carola-Bibiane Schönlieb
Publication date: 7 December 2016
Abstract: We discuss a special form of gradient descent that has become known in the literature as the linearised Bregman iteration. The idea is to replace the classical squared two-norm metric in the gradient descent setting with a generalised Bregman distance based on a more general proper, convex and lower semi-continuous functional. Gradient descent as well as the entropic mirror descent of Nemirovsky and Yudin are special cases, as is a specific form of non-linear Landweber iteration introduced by Bachmayr and Burger. We analyse the linearised Bregman iteration in a setting where the functional we want to minimise is neither necessarily Lipschitz-continuous (in the classical sense) nor necessarily convex, and establish a global convergence result under the additional assumption that the functional we wish to minimise satisfies the so-called Kurdyka-Łojasiewicz property.
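The abstract names entropic mirror descent as a special case of the linearised Bregman iteration, obtained by choosing the negative entropy as the Bregman functional. A minimal sketch of that special case (not the authors' general algorithm) is shown below; the function names, step size, and example objective are illustrative assumptions.

```python
import numpy as np

def entropic_mirror_descent(grad, x0, step=0.5, iters=500):
    """Entropic mirror descent on the probability simplex: the linearised
    Bregman iteration with J(x) = sum_i x_i log x_i (negative entropy).
    The dual update p_{k+1} = p_k - step * grad(x_k), followed by mapping
    back via the convex conjugate of J, yields a multiplicative update in
    place of the usual squared two-norm gradient step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # multiplicative update induced by the entropy Bregman distance
        x = x * np.exp(-step * grad(x))
        x /= x.sum()  # renormalise onto the probability simplex
    return x

# Illustrative objective: E(x) = 0.5 * ||x - c||^2 over the simplex.
# Since c already lies in the simplex, the minimiser is c itself.
c = np.array([0.2, 0.3, 0.5])
x_star = entropic_mirror_descent(lambda x: x - c, np.ones(3) / 3)
```

Choosing a different proper, convex, lower semi-continuous functional in place of the entropy changes the multiplicative update into the corresponding Bregman proximal step; with the squared two-norm one recovers plain gradient descent.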
Mathematics Subject Classification: Numerical mathematical programming methods (65K05) · Numerical optimization and variational techniques (65K10) · Nonconvex programming, global optimization (90C26) · Nonlinear programming (90C30) · Numerical methods based on nonlinear programming (49M37)