Convergence of the gradient method for ill-posed problems

From MaRDI portal
Publication:2360784

DOI: 10.3934/IPI.2017033 | zbMATH Open: 1368.65082 | arXiv: 1606.00274 | OpenAlex: W2963535588 | MaRDI QID: Q2360784


Author: Stefan Kindermann


Publication date: 12 July 2017

Published in: Inverse Problems and Imaging

Abstract: We study the convergence of the gradient descent method for solving ill-posed problems where the solution is characterized as a global minimum of a differentiable functional in a Hilbert space. The classical least-squares functional for nonlinear operator equations is a special instance of this framework and the gradient method then reduces to Landweber iteration. The main result of this article is a proof of weak and strong convergence under new nonlinearity conditions that generalize the classical tangential cone conditions.


Full work available at URL: https://arxiv.org/abs/1606.00274
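As the abstract notes, for the classical least-squares functional the gradient method reduces to Landweber iteration. A minimal sketch of that special case for a linear problem, assuming a step size bounded by the operator norm (this is illustrative only and does not reproduce the paper's generalized nonlinearity conditions):

```python
import numpy as np

def landweber(A, y, omega, n_iter, x0=None):
    """Landweber iteration for min_x 0.5 * ||A x - y||^2.

    The gradient of the least-squares functional is A^T (A x - y),
    so each step is plain gradient descent with step size omega.
    Convergence requires omega < 1 / ||A||^2 (spectral norm).
    """
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iter):
        x = x - omega * A.T @ (A @ x - y)
    return x

# Mildly ill-conditioned toy example (hypothetical data, not from the paper)
A = np.array([[1.0, 0.0],
              [0.0, 0.1]])
y = np.array([1.0, 0.05])  # exact solution: [1.0, 0.5]
omega = 0.9 / np.linalg.norm(A, 2) ** 2
x = landweber(A, y, omega, 5000)
```

For ill-posed problems the iteration is typically stopped early (e.g. by the discrepancy principle) rather than run to convergence, since late iterations amplify data noise.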





Cited In (20)





