A discrepancy principle for the Landweber iteration based on risk minimization
Publication: 2274803
DOI: 10.1016/j.aml.2019.04.005 · zbMath: 1429.65118 · OpenAlex: W2936828696 · Wikidata: Q128057000 · Scholia: Q128057000 · MaRDI QID: Q2274803
Federico Benvenuto, Cristina Campi
Publication date: 1 October 2019
Published in: Applied Mathematics Letters
Full work available at URL: http://hdl.handle.net/11567/961489
Keywords: regularization; inverse problems; stopping rule; risk minimization; white Gaussian noise; Landweber algorithm; linear system with noisy data; signal formation model
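The keywords name the Landweber algorithm together with a stopping rule for noisy linear systems. For orientation only, the sketch below shows the classical Landweber iteration stopped by the standard (Morozov) discrepancy principle; it assumes a known noise-level bound `delta` and is a generic textbook illustration, not the risk-minimization rule proposed in this publication. The function name and parameters are placeholders.

```python
import numpy as np

def landweber_discrepancy(A, y, delta, tau_dp=1.1, max_iter=10_000):
    """Classical Landweber iteration for the noisy linear system A x ≈ y,
    stopped by the standard (Morozov) discrepancy principle:
    iterate while ||A x_k - y|| > tau_dp * delta, where delta bounds the noise norm.

    Generic sketch only; not the risk-minimization stopping rule of the paper.
    """
    # Step size must satisfy 0 < omega < 2 / ||A||_2^2 for convergence.
    omega = 1.0 / (np.linalg.norm(A, 2) ** 2)
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        residual = y - A @ x
        if np.linalg.norm(residual) <= tau_dp * delta:
            return x, k  # stop: discrepancy has reached the noise level
        x = x + omega * (A.T @ residual)  # Landweber update step
    return x, max_iter
```

As a quick check, one can generate `y = A @ x_true + noise` with Gaussian noise, set `delta = np.linalg.norm(noise)`, and verify that the returned iterate stabilizes near `x_true` before the residual drops below the noise level.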
Related Items (2)
- Maximum Likelihood Estimation of Regularization Parameters in High-Dimensional Inverse Problems: An Empirical Bayesian Approach Part I: Methodology and Experiments
- Predictive risk estimation for the expectation maximization algorithm with Poisson data
Cites Work
- Iterative solution of large sparse systems of equations
- Discrepancy principles for Tikhonov regularization of ill-posed problems leading to optimal convergence rates
- On Minimization Strategies for Choice of the Regularization Parameter in Ill-Posed Problems
- Generalized Cross-Validation as a Method for Choosing a Good Ridge Parameter
- The Use of the L-Curve in the Regularization of Discrete Ill-Posed Problems
- Computational Methods for Inverse Problems
- Some Comments on C_p
- An Iteration Formula for Fredholm Integral Equations of the First Kind
- Linear integral equations