Invertible residual networks in the context of regularization theory for linear inverse problems
Publication:6141568
Abstract: Learned solvers for inverse problems exhibit remarkable performance in applications such as image reconstruction. These data-driven reconstruction methods often follow a two-step scheme: first, the reconstruction scheme, which is often based on neural networks, is trained on a dataset; second, the trained scheme is applied to new measurements to obtain reconstructions. We follow these steps but parameterize the reconstruction scheme with invertible residual networks (iResNets). We demonstrate that the invertibility makes it possible to investigate how training and architecture choices influence the resulting reconstruction scheme. For example, assuming local approximation properties of the network, we show that these schemes become convergent regularizations. In addition, the investigations reveal a formal link to the linear regularization theory of linear inverse problems and provide a nonlinear spectral regularization for particular architecture classes. On the numerical side, we investigate the local approximation property of selected trained architectures and present a series of experiments on the MNIST dataset that underpin and extend our theoretical findings.
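The central ingredient of the abstract is the invertibility of the residual blocks: if the residual map f has Lipschitz constant below one, then x ↦ x + f(x) is invertible and the inverse can be evaluated by a Banach fixed-point iteration. The following is a minimal, hypothetical sketch of this construction (not the authors' code; all weights and sizes are illustrative), showing the contraction constraint and the fixed-point inversion.

```python
import numpy as np

# Hypothetical sketch of one invertible residual block x -> x + f(x),
# with the residual f rescaled so that its Lipschitz constant stays below 1.

rng = np.random.default_rng(0)
d = 8  # toy dimension

# Residual map f(x) = W2 @ tanh(W1 @ x); tanh is 1-Lipschitz, so
# ||W1||_2 * ||W2||_2 bounds the Lipschitz constant of f.
W1 = rng.standard_normal((d, d))
W2 = rng.standard_normal((d, d))
lip_bound = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)
scale = 0.9 / lip_bound            # enforce Lip(f) <= 0.9 < 1
W1 *= np.sqrt(scale)
W2 *= np.sqrt(scale)

def f(x):
    return W2 @ np.tanh(W1 @ x)

def forward(x):
    """Residual block: x + f(x)."""
    return x + f(x)

def inverse(y, n_iter=50):
    """Invert y = x + f(x) by the fixed-point iteration x_{k+1} = y - f(x_k),
    which converges because f is a contraction."""
    x = y.copy()
    for _ in range(n_iter):
        x = y - f(x)
    return x

x = rng.standard_normal(d)
x_rec = inverse(forward(x))
print(np.linalg.norm(x - x_rec))   # close to 0: the block is invertible
```

The contraction condition Lip(f) < 1 is what guarantees both the invertibility of the block and the convergence of the fixed-point iteration used to evaluate the inverse.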
Recommendations
- Bayesian view on the training of invertible residual networks for solving linear inverse problems
- Learned regularizers for inverse problems
- Solving ill-posed inverse problems using iterative deep neural networks
- Big in Japan: regularizing networks for solving inverse problems
- NETT: solving inverse problems with deep neural networks
Cites work
- scientific article; zbMATH DE number 41289
- scientific article; zbMATH DE number 936298
- Bayesian Imaging Using Plug & Play Priors: When Langevin Meets Tweedie
- CLIP: cheap Lipschitz training of neural networks
- Data driven regularization by projection
- Data-driven nonsmooth optimization
- Deep Convolutional Neural Network for Inverse Problems in Imaging
- Deep null space learning for inverse problems: convergence analysis and rates
- Designing optimal spectral filters for inverse problems
- How general are general source conditions?
- Learning maximally monotone operators for image recovery
- Modern regularization methods for inverse problems
- NETT: solving inverse problems with deep neural networks
- Regularization by architecture: a deep prior approach for inverse problems
- Regularization methods in Banach spaces.
- Regularization of inverse problems by filtered diagonal frame decomposition
- Regularization theory of the analytic deep prior approach
- Solving inverse problems using data-driven models
Cited in (6)
- Fixed-point algorithms for inverse of residual rectifier neural networks
- Bayesian view on the training of invertible residual networks for solving linear inverse problems
- Regularization by architecture: a deep prior approach for inverse problems
- Deep unfolding as iterative regularization for imaging inverse problems
- Neural-network-based regularization methods for inverse problems in imaging
- Convergence of non-linear diagonal frame filtering for regularizing inverse problems