Deep null space learning for inverse problems: convergence analysis and rates
Publication:4625197
DOI: 10.1088/1361-6420/aaf14a
zbMath: 1491.65039
arXiv: 1806.06137
OpenAlex: W3105109286
Wikidata: Q128987752 (Scholia: Q128987752)
MaRDI QID: Q4625197
Markus Haltmeier, Johannes Schwab, Stephan Antholzer
Publication date: 22 February 2019
Published in: Inverse Problems
Full work available at URL: https://arxiv.org/abs/1806.06137
Mathematics Subject Classification:
Artificial neural networks and deep learning (68T07)
Numerical solutions of ill-posed problems in abstract spaces; regularization (65J20)
Numerical solution to inverse problems in abstract spaces (65J22)
Related Items (12)
Invertible residual networks in the context of regularization theory for linear inverse problems
On Learning the Invisible in Photoacoustic Tomography with Flat Directionally Sensitive Detector
Numerical methods for identifying the diffusion coefficient in a nonlinear elliptic equation
Data-consistent neural networks for solving nonlinear inverse problems
Computed tomography reconstruction using deep image prior and learned reconstruction methods
Learning the invisible: a hybrid deep learning-shearlet framework for limited angle computed tomography
Data driven regularization by projection
Big in Japan: regularizing networks for solving inverse problems
Solving inverse problems using data-driven models
Estimating adsorption isotherm parameters in chromatography via a virtual injection promoting double feed-forward neural network
Hybrid projection methods for large-scale inverse problems with mixed Gaussian priors
On Learned Operator Correction in Inverse Problems