Learning filter functions in regularisers by minimising quotients

From MaRDI portal
Publication:5864036

DOI: 10.1007/978-3-319-58771-4_41 · zbMATH Open: 1489.68218 · arXiv: 1704.00989 · OpenAlex: W2605133707 · MaRDI QID: Q5864036 · FDO: Q5864036


Authors: Martin Benning, Guy Gilboa, Joana Sarah Grah, Carola-Bibiane Schönlieb


Publication date: 3 June 2022

Published in: Lecture Notes in Computer Science

Abstract: Learning approaches have recently become very popular in the field of inverse problems. A large variety of methods has been established in recent years, ranging from bi-level learning to high-dimensional machine learning techniques. Most learning approaches, however, only aim at fitting parametrised models to favourable training data whilst ignoring misfit training data completely. In this paper, we follow up on the idea of learning parametrised regularisation functions by quotient minimisation, as established in [3]. We extend the model therein to include higher-dimensional filter functions to be learned, and allow for fit- and misfit-training data consisting of multiple functions. We first present results resembling the behaviour of well-established derivative-based sparse regularisers such as total variation or higher-order total variation in one dimension. Our second and main contribution is the introduction of novel families of non-derivative-based regularisers, accomplished by learning favourable scales and geometric properties while at the same time avoiding unfavourable ones.


Full work available at URL: https://arxiv.org/abs/1704.00989









Cited In (4)





This page was built for publication: Learning filter functions in regularisers by minimising quotients
