Invertible residual networks in the context of regularization theory for linear inverse problems
DOI: 10.1088/1361-6420/AD0660
arXiv: 2306.01335
MaRDI QID: Q6141568
Sören Dittmer, Clemens Arndt, Judith Nickel, Alexander Denker, Tobias Kluth, Unnamed Author, Unnamed Author, Peter Maass
Publication date: 20 December 2023
Published in: Inverse Problems
Full work available at URL: https://arxiv.org/abs/2306.01335
Keywords: local approximation property; invertible residual networks; convergent regularization; learned nonlinear spectral regularization; learning for inverse problems
MSC classification:
- Artificial neural networks and deep learning (68T07)
- Numerical solutions of ill-posed problems in abstract spaces; regularization (65J20)
- Numerical solution to inverse problems in abstract spaces (65J22)
Cites Work
- Regularization methods in Banach spaces
- CLIP: cheap Lipschitz training of neural networks
- Regularization by architecture: a deep prior approach for inverse problems
- Regularization of inverse problems by filtered diagonal frame decomposition
- Designing Optimal Spectral Filters for Inverse Problems
- Deep Convolutional Neural Network for Inverse Problems in Imaging
- Deep null space learning for inverse problems: convergence analysis and rates
- Regularization theory of the analytic deep prior approach
- Bayesian Imaging Using Plug & Play Priors: When Langevin Meets Tweedie
- NETT: solving inverse problems with deep neural networks
- Data driven regularization by projection
- Data-Driven Nonsmooth Optimization
- Modern regularization methods for inverse problems
- Solving inverse problems using data-driven models
- How general are general source conditions?
- Learning Maximally Monotone Operators for Image Recovery