Invertible residual networks in the context of regularization theory for linear inverse problems


DOI: 10.1088/1361-6420/AD0660 · arXiv: 2306.01335 · MaRDI QID: Q6141568

Sören Dittmer, Clemens Arndt, Judith Nickel, Alexander Denker, Tobias Kluth, Peter Maass, and two further authors (names not available)

Publication date: 20 December 2023

Published in: Inverse Problems

Abstract: Learned inverse problem solvers exhibit remarkable performance in applications such as image reconstruction. These data-driven reconstruction methods often follow a two-step scheme: first, one trains the reconstruction scheme, typically a neural network, on a dataset; second, one applies the trained scheme to new measurements to obtain reconstructions. We follow these steps but parameterize the reconstruction scheme with invertible residual networks (iResNets). We demonstrate that the invertibility makes it possible to investigate how training and architecture choices influence the resulting reconstruction scheme. For example, assuming local approximation properties of the network, we show that these schemes become convergent regularizations. In addition, the investigations reveal a formal link to the linear regularization theory of linear inverse problems and provide a nonlinear spectral regularization for particular architecture classes. On the numerical side, we investigate the local approximation property of selected trained architectures and present a series of experiments on the MNIST dataset that underpin and extend our theoretical findings.
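For context, the "linear regularization theory" the abstract refers to is the classical filter-based (spectral) description of regularized inverses. The following is standard textbook notation for a compact linear operator $A$ with singular system $(\sigma_n, u_n, v_n)$, not notation taken from the paper itself:

```latex
% Filter-based regularization of y = Ax: the family of reconstructions
\[
  x_\alpha \;=\; \sum_{n} g_\alpha(\sigma_n^2)\,\sigma_n\,\langle y, u_n\rangle\, v_n
\]
% is a convergent regularization when, among other conditions,
% g_alpha(lambda) -> 1/lambda pointwise as alpha -> 0.
% Example: Tikhonov regularization corresponds to the filter
\[
  g_\alpha(\lambda) \;=\; \frac{1}{\lambda+\alpha}
  \qquad\Longleftrightarrow\qquad
  x_\alpha \;=\; (A^*A + \alpha I)^{-1} A^* y .
\]
```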


Full work available at URL: https://arxiv.org/abs/2306.01335
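The abstract itself contains no code; as a rough illustration, here is a minimal sketch of the kind of invertible residual block the paper builds on, following the standard iResNet construction of Behrmann et al.: the residual branch is constrained to be a contraction via spectral normalization, so the block is invertible and its inverse is computable by fixed-point iteration. All names, layer sizes, and the Lipschitz scaling below are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

class InvertibleResidualBlock(nn.Module):
    """One residual block x -> x + c*f(x), invertible because Lip(c*f) <= c < 1."""

    def __init__(self, dim: int, hidden: int = 128, c: float = 0.9):
        super().__init__()
        self.c = c  # contraction factor, a hypothetical choice for this sketch
        # spectral_norm bounds each linear map's operator norm by (approximately) 1;
        # together with the 1-Lipschitz ELU, the branch f is ~1-Lipschitz,
        # so scaling its output by c < 1 makes x -> x + c*f(x) invertible.
        self.f = nn.Sequential(
            spectral_norm(nn.Linear(dim, hidden)),
            nn.ELU(),
            spectral_norm(nn.Linear(hidden, dim)),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.c * self.f(x)

    @torch.no_grad()
    def inverse(self, y: torch.Tensor, n_iter: int = 100) -> torch.Tensor:
        # Banach fixed-point iteration x_{k+1} = y - c*f(x_k); converges
        # geometrically because c*f is a contraction.
        x = y.clone()
        for _ in range(n_iter):
            x = y - self.c * self.f(x)
        return x

# Usage: invert the forward map on flattened 28x28 (MNIST-sized) vectors.
block = InvertibleResidualBlock(dim=784)
x = torch.randn(4, 784)
x_rec = block.inverse(block(x))          # recovers x up to fixed-point tolerance
print(torch.max(torch.abs(x - x_rec)))   # small residual
```

This invertibility is what lets one analyze the trained reconstruction scheme directly, e.g. probe the local approximation properties that the paper's convergence results assume.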












