Convergence rates in ℓ¹-regularization if the sparsity assumption fails


DOI: 10.1088/0266-5611/29/2/025013
zbMath: 1262.49010
arXiv: 1209.5732
OpenAlex: W3103197010
MaRDI QID: Q4917606

Bernd Hofmann, Jens Flemming, Martin Burger

Publication date: 2 May 2013

Published in: Inverse Problems

Full work available at URL: https://arxiv.org/abs/1209.5732




Related Items (24)

Oversmoothing regularization with \(\ell^1\)-penalty term
Sparsity regularization of the diffusion coefficient identification problem: well-posedness and convergence rates
On ℓ¹-Regularization Under Continuity of the Forward Operator in Weaker Topologies
Penalty-based smoothness conditions in convex variational regularization
Tikhonov regularization with oversmoothing penalty for non-linear ill-posed problems in Hilbert scales
Efficient regularization with wavelet sparsity constraints in photoacoustic tomography
Flexible sparse regularization
Elastic-net regularization versus ℓ¹-regularization for linear inverse problems with quasi-sparse solutions
Optimal convergence rates for sparsity promoting wavelet-regularization in Besov spaces
Convergence rates of Tikhonov regularizations for elliptic and parabolic inverse radiativity problems
Convergence rates of Tikhonov regularization for recovering growth rates in a Lotka-Volterra competition model with diffusion
Variational source conditions and stability estimates for inverse electromagnetic medium scattering problems
Existence of variational source conditions for nonlinear inverse problems in Banach spaces
Variational source condition for ill-posed backward nonlinear Maxwell's equations
Injectivity and \(\text{weak}^\star\)-to-weak continuity suffice for convergence rates in \(\ell^{1}\)-regularization
Multiscale scanning in inverse problems
\(\alpha\ell_1 - \beta\ell_2\) regularization for sparse recovery
Convergence analysis of (statistical) inverse problems under conditional stability estimates
Maximal spaces for approximation rates in \(\ell^1\)-regularization
Convergence rates in ℓ¹-regularization when the basis is not smooth enough
Modern regularization methods for inverse problems
Tikhonov regularization with \({\ell^{0}}\)-term complementing a convex penalty: \({\ell^{1}}\)-convergence under sparsity constraints
Variational source conditions for inverse Robin and flux problems by partial measurements
Regularization properties of the sequential discrepancy principle for Tikhonov regularization in Banach spaces




This page was built for publication: Convergence rates in ℓ¹-regularization if the sparsity assumption fails