Convergence rates in ℓ1-regularization if the sparsity assumption fails
Publication: 4917606
DOI: 10.1088/0266-5611/29/2/025013 · zbMath: 1262.49010 · arXiv: 1209.5732 · OpenAlex: W3103197010 · MaRDI QID: Q4917606
Bernd Hofmann, Jens Flemming, Martin Burger
Publication date: 2 May 2013
Published in: Inverse Problems
Full work available at URL: https://arxiv.org/abs/1209.5732
Related Items (24)
Oversmoothing regularization with \(\ell^1\)-penalty term
Sparsity regularization of the diffusion coefficient identification problem: well-posedness and convergence rates
On \(\ell^1\)-Regularization Under Continuity of the Forward Operator in Weaker Topologies
Penalty-based smoothness conditions in convex variational regularization
Tikhonov regularization with oversmoothing penalty for non-linear ill-posed problems in Hilbert scales
Efficient regularization with wavelet sparsity constraints in photoacoustic tomography
Flexible sparse regularization
Elastic-net regularization versus \(\ell^1\)-regularization for linear inverse problems with quasi-sparse solutions
Optimal convergence rates for sparsity promoting wavelet-regularization in Besov spaces
Convergence rates of Tikhonov regularizations for elliptic and parabolic inverse radiativity problems
Convergence rates of Tikhonov regularization for recovering growth rates in a Lotka-Volterra competition model with diffusion
Variational source conditions and stability estimates for inverse electromagnetic medium scattering problems
Existence of variational source conditions for nonlinear inverse problems in Banach spaces
Variational source condition for ill-posed backward nonlinear Maxwell’s equations
Injectivity and \(\text{weak}^\star\)-to-weak continuity suffice for convergence rates in \(\ell^1\)-regularization
Multiscale scanning in inverse problems
\(\alpha\ell_1-\beta\ell_2\) regularization for sparse recovery
Convergence analysis of (statistical) inverse problems under conditional stability estimates
Maximal spaces for approximation rates in \(\ell^1\)-regularization
Convergence rates in \(\ell^1\)-regularization when the basis is not smooth enough
Modern regularization methods for inverse problems
Tikhonov regularization with \(\ell^0\)-term complementing a convex penalty: \(\ell^1\)-convergence under sparsity constraints
Variational source conditions for inverse Robin and flux problems by partial measurements
Regularization properties of the sequential discrepancy principle for Tikhonov regularization in Banach spaces