Convergence rates for Tikhonov regularization from different kinds of smoothness conditions

From MaRDI portal
Publication:5474561

DOI: 10.1080/00036810500474838
zbMath: 1110.65041
OpenAlex: W2043258083
Wikidata: Q58190375
Scholia: Q58190375
MaRDI QID: Q5474561

Bernd Hofmann, Ulrich Tautenhahn, Albrecht Böttcher, Masahiro Yamamoto

Publication date: 26 June 2006

Published in: Applicable Analysis

Full work available at URL: https://doi.org/10.1080/00036810500474838
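For context, the publication concerns convergence rates of Tikhonov regularization for ill-posed problems. A minimal sketch of the classical method in the finite-dimensional linear case (the closed-form normal-equations solution, not the authors' analysis) might look like this; the matrix sizes and the parameter `alpha` below are illustrative assumptions:

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Tikhonov-regularized least squares:
    argmin_x ||A x - b||^2 + alpha * ||x||^2,
    with closed-form solution x = (A^T A + alpha I)^{-1} A^T b.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Mildly ill-conditioned example: small singular values make the
# unregularized least-squares solution unstable under noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5)) @ np.diag([1.0, 0.5, 0.1, 1e-3, 1e-6])
x_true = np.ones(5)
b = A @ x_true + 1e-4 * rng.standard_normal(20)
x_reg = tikhonov(A, b, alpha=1e-4)
print(np.linalg.norm(x_reg))  # stays bounded despite ill-conditioning
```

The convergence-rate questions studied in the paper concern how fast `x_reg` approaches the true solution as the noise level and `alpha` tend to zero, under various smoothness (source) conditions.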




Related Items (24)

Mathematical framework for traction force microscopy
On the analysis of distance functions for linear ill-posed problems with an application to the integration operator in L2
Local solutions to inverse problems in geodesy. The impact of the noise covariance structure upon the accuracy of estimation
Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
Estimation of linear functionals from indirect noisy data without knowledge of the noise level
Inverse learning in Hilbert scales
Impact of conditional stability: Convergence rates for general linear regularization methods
Data driven regularization by projection
Convergence rates of a multilevel method for the regularization of nonlinear ill-posed problems
Conditional Stability Estimates for Ill-Posed PDE Problems by Using Interpolation
Regularized collocation method for Fredholm integral equations of the first kind
Convergence results and low-order rates for nonlinear Tikhonov regularization with oversmoothing penalty term
On the quasioptimal regularization parameter choices for solving ill-posed problems
Regularization by projection: Approximation theoretic aspects and distance functions
Range Inclusions and Approximate Source Conditions with General Benchmark Functions
The index function and Tikhonov regularization for ill-posed problems
Direct and inverse results in variable Hilbert scales
Regularization by projection in variable Hilbert scales
Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
Bayesian inverse problems with non-commuting operators
Sparse recovery by the standard Tikhonov method
Further convergence results on the general iteratively regularized Gauss-Newton methods under the discrepancy principle
A unified treatment for Tikhonov regularization using a general stabilizing operator
Conditional stability stopping rule for gradient methods applied to inverse and ill-posed problems




