Regularising inverse problems with generative machine learning models

From MaRDI portal

DOI: 10.1007/S10851-023-01162-X
arXiv: 2107.11191
OpenAlex: W3183492725
MaRDI QID: Q6154435
FDO: Q6154435

M. A. G. Duff, Neill D. F. Campbell, Matthias J. Ehrhardt

Publication date: 15 February 2024

Published in: Journal of Mathematical Imaging and Vision

Abstract: Deep neural network approaches to inverse imaging problems have produced impressive results in the last few years. In this paper, we consider the use of generative models in a variational regularisation approach to inverse problems. The considered regularisers penalise images that are far from the range of a generative model that has learned to produce images similar to a training dataset. We name this family "generative regularisers". The success of generative regularisers depends on the quality of the generative model and so we propose a set of desired criteria to assess generative models and guide future research. In our numerical experiments, we evaluate three common generative models, autoencoders, variational autoencoders and generative adversarial networks, against our desired criteria. We also test three different generative regularisers on the inverse problems of deblurring, deconvolution, and tomography. We show that restricting solutions of the inverse problem to lie exactly in the range of a generative model can give good results but that allowing small deviations from the range of the generator produces more consistent results.
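The range-restricted variant described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: we stand in a toy linear map `W` for the trained generator G, a random matrix `A` for the forward operator, and solve min_z ||A G(z) - y||^2 + lam ||z||^2 by plain gradient descent. All names and parameter values here are our own assumptions for the sketch; the paper's "soft" variant would instead penalise the distance of the image from the range of G rather than constraining it to lie there exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 64, 8
# Toy linear "generator" G(z) = W z and an underdetermined forward operator A
# (stand-ins for a trained network and, e.g., a blurring or tomography operator).
W = rng.standard_normal((n, k)) / np.sqrt(k)
A = rng.standard_normal((32, n)) / np.sqrt(n)

z_true = rng.standard_normal(k)
x_true = W @ z_true
y = A @ x_true  # noiseless measurements for simplicity


def recover(y, A, W, lam=1e-3, lr=0.02, iters=3000):
    """Gradient descent on the latent z for the range-restricted problem
    min_z ||A W z - y||^2 + lam ||z||^2, returning the image W z."""
    z = np.zeros(W.shape[1])
    AG = A @ W  # composed linear map, measurements as a function of z
    for _ in range(iters):
        grad = 2 * AG.T @ (AG @ z - y) + 2 * lam * z
        z -= lr * grad
    return W @ z


x_hat = recover(y, A, W)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.4f}")
```

Because the true image lies exactly in the range of the toy generator, the constrained recovery succeeds here; the paper's point is that for real data and imperfect generators, allowing small deviations from the range is more robust.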


Full work available at URL: https://arxiv.org/abs/2107.11191







Cited In (3)





This page was built for publication: Regularising inverse problems with generative machine learning models
