Regularising inverse problems with generative machine learning models
From MaRDI portal
Publication: 6154435
MSC classifications
- Computational learning theory (68Q32)
- Image processing (compression, reconstruction, etc.) in information and communication theory (94A08)
- Computing methodologies for image processing (68U10)
- Numerical methods for inverse problems for initial value and initial-boundary value problems involving PDEs (65M32)
Abstract: Deep neural network approaches to inverse imaging problems have produced impressive results in the last few years. In this paper, we consider the use of generative models in a variational regularisation approach to inverse problems. The considered regularisers penalise images that are far from the range of a generative model that has learned to produce images similar to a training dataset. We name this family "generative regularisers". The success of generative regularisers depends on the quality of the generative model, and so we propose a set of desired criteria to assess generative models and guide future research. In our numerical experiments, we evaluate three common generative models, autoencoders, variational autoencoders and generative adversarial networks, against our desired criteria. We also test three different generative regularisers on the inverse problems of deblurring, deconvolution, and tomography. We show that restricting solutions of the inverse problem to lie exactly in the range of a generative model can give good results, but that allowing small deviations from the range of the generator produces more consistent results.
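The abstract's second idea, allowing small deviations from the generator's range, can be sketched as a joint minimisation over the image x and a latent code z. The following toy NumPy example is a hypothetical illustration (not the authors' code): it uses a random linear forward operator A and a linear "generator" G(z) = Wz, and minimises ||Ax - y||^2 + lam * ||x - G(z)||^2 by alternating gradient steps. All names (A, W, lam, step) are assumptions made for this sketch.

```python
import numpy as np

# Hypothetical sketch of a "generative regulariser":
# minimise  ||A x - y||^2 + lam * ||x - G(z)||^2  over image x and latent z,
# so x is encouraged to stay *near* (not exactly in) the generator's range.

rng = np.random.default_rng(0)

n, m, k = 20, 12, 4                  # image size, measurement size, latent size
A = rng.standard_normal((m, n))      # stand-in forward operator (e.g. blur/tomography)
W = rng.standard_normal((n, k))      # linear "generator" G(z) = W z (assumption)
G = lambda z: W @ z

x_true = G(rng.standard_normal(k))   # ground truth lies in the generator's range
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam, step = 1.0, 1e-3                # regularisation weight and gradient step size
x = np.zeros(n)
z = np.zeros(k)
for _ in range(5000):
    # alternating gradient steps on each block of variables
    x -= step * (2 * A.T @ (A @ x - y) + 2 * lam * (x - G(z)))
    z -= step * (2 * lam * W.T @ (G(z) - x))

print(np.linalg.norm(x - x_true))    # reconstruction error
```

Setting lam very large approximates the "restrict solutions exactly to the range" variant discussed in the abstract; a moderate lam permits the small deviations that the paper reports as giving more consistent results.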
Cites work
- scientific article; zbMATH DE number 803211
- K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation
- A Generative Variational Model for Inverse Problems in Imaging
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A kernel two-sample test
- An Introduction to Variational Autoencoders
- Decoding by Linear Programming
- Deep learning
- Global Guarantees for Enforcing Deep Generative Priors by Empirical Risk
- Inverse problems. Tikhonov theory and algorithms
- Mathematical image processing. Translated from the German
- Modern regularization methods for inverse problems
- NETT: solving inverse problems with deep neural networks
- Nonlinear total variation based noise removal algorithms
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- Regularization by architecture: a deep prior approach for inverse problems
- Scikit-learn: machine learning in Python
- Solving Inverse Problems by Joint Posterior Maximization with Autoencoding Prior
- Solving inverse problems using data-driven models
- Stochastic seismic waveform inversion using generative adversarial networks as a geological prior
- The Little Engine that Could: Regularization by Denoising (RED)
Cited in (6)
- DRIP: deep regularizers for inverse problems
- Iteratively Refined Image Reconstruction with Learned Attentive Regularizers
- Normalizing flow regularization for photoacoustic tomography
- Neural-network-based regularization methods for inverse problems in imaging
- Learning from small data sets: patch-based regularizers in inverse problems for image reconstruction
- Robustness and exploration of variational and machine learning approaches to inverse problems: an overview