Beyond the Bakushinskii veto: regularising linear inverse problems without knowing the noise distribution
From MaRDI portal
Publication:777510
DOI: 10.1007/S00211-020-01122-2
zbMATH Open: 1453.65124
arXiv: 1811.06721
OpenAlex: W2901152527
MaRDI QID: Q777510
FDO: Q777510
Authors: Bastian Harrach, Tim Jahn, Roland Potthast
Publication date: 7 July 2020
Published in: Numerische Mathematik
Abstract: This article deals with the solution of linear ill-posed equations in Hilbert spaces. Often, only a corrupted measurement of the right-hand side is available, and the Bakushinskii veto tells us that we cannot solve the equation if we do not know the noise level. In applications, however, it is unrealistic to know the error of a measurement a priori; in practice, it may often be estimated by averaging multiple measurements. We integrate this into our analysis and obtain convergence to the true solution under the sole assumption that the measurements are unbiased, independent and identically distributed according to an unknown distribution.
Full work available at URL: https://arxiv.org/abs/1811.06721
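The idea summarised in the abstract — estimate the unknown noise level from the scatter of repeated i.i.d. measurements, then run a standard noise-level-based parameter choice with that estimate — can be illustrated with a minimal numerical sketch. This is not code from the paper: the diagonal toy operator, the uniform noise law, and the Tikhonov-plus-discrepancy-principle combination are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem (not from the paper): a mildly ill-posed
# diagonal operator A on R^d with decaying singular values s_i.
d = 50
s = 1.0 / np.arange(1, d + 1)           # singular values of A
x_true = 1.0 / np.arange(1, d + 1)**2   # smooth true solution
y_exact = s * x_true                    # exact right-hand side A x

# n unbiased i.i.d. measurements of y_exact under an *unknown* noise law
# (centred uniform noise here, standing in for a non-Gaussian distribution).
n = 400
sigma = 0.05
Y = y_exact + rng.uniform(-sigma, sigma, size=(n, d))

# Averaging reduces the noise, and the sample variance lets us *estimate*
# the noise level of the averaged data: Var(mean_i) ~ sample_var_i / n.
y_bar = Y.mean(axis=0)
delta_hat = np.sqrt(np.sum(Y.var(axis=0, ddof=1) / n))

# Tikhonov regularisation with the discrepancy principle, using the
# estimated noise level delta_hat in place of a known one.
tau = 1.1
alpha = 1.0
for _ in range(100):
    x_alpha = s * y_bar / (s**2 + alpha)            # Tikhonov solution
    residual = np.linalg.norm(s * x_alpha - y_bar)  # data misfit
    if residual <= tau * delta_hat:
        break
    alpha *= 0.5

err = np.linalg.norm(x_alpha - x_true)
print(f"estimated noise level: {delta_hat:.4f}, reconstruction error: {err:.4f}")
```

As n grows, the averaged measurement converges to the exact data and the estimated noise level shrinks accordingly, which is the mechanism behind the convergence result stated in the abstract.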
Recommendations
- Regularizing linear inverse problems under unknown non-Gaussian white noise allowing repeated measurements
- Regularization of statistical inverse problems and the Bakushinskiĭ veto
- Regularization of some linear ill-posed problems with discretized random noisy data
- On weakly bounded noise in ill-posed problems
- Noise Level Free Regularization of General Linear Inverse Problems under Unconstrained White Noise
Cites Work
- Mathematical foundations of infinite-dimensional statistical models
- Fundamentals of nonparametric Bayesian inference
- Regularization independent of the noise level: an analysis of quasi-optimality
- Discrepancy based model selection in statistical inverse problems
- Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration
- Nonstationary inverse problems and state estimation
- Geometry of linear ill-posed problems in variable Hilbert scales
- Risk hull method and regularization by projections of ill-posed inverse problems
- Practical Approximate Solutions to Linear Operator Equations When the Data are Noisy
- Discrete inverse problems. Insight and algorithms.
- Convergence Rates of General Regularization Methods for Statistical Inverse Problems and Applications
- Inverse Problems Light: Numerical Differentiation
- Risk estimators for choosing regularization parameters in ill-posed problems -- properties and limitations
- Inverse modeling. An introduction to the theory and methods of inverse problems and data assimilation
- On the lifting of deterministic convergence rates for inverse problems with stochastic noise
- Regularization of statistical inverse problems and the Bakushinskiĭ veto
- Adaptivity and oracle inequalities in linear statistical inverse problems: a (numerical) survey
- Optimal adaptation for early stopping in statistical inverse problems
Cited In (11)
- Optimal Convergence of the Discrepancy Principle for Polynomially and Exponentially Ill-Posed Operators under White Noise
- Regularizing linear inverse problems under unknown non-Gaussian white noise allowing repeated measurements
- Noise Level Free Regularization of General Linear Inverse Problems under Unconstrained White Noise
- Dual gradient method for ill-posed problems using multiple repeated measurement data
- A modified discrepancy principle to attain optimal convergence rates under unknown noise
- On the asymptotical regularization for linear inverse problems in presence of white noise
- A probabilistic oracle inequality and quantification of uncertainty of a modified discrepancy principle for statistical inverse problems
- On the discrepancy principle for stochastic gradient descent
- Towards adaptivity via a new discrepancy principle for Poisson inverse problems
- Weighted discrepancy principle and optimal adaptivity in Poisson inverse problems
- Robust recovery of a kind of weighted l1-minimization without noise level