Optimal rates for regularization of statistical inverse learning problems

From MaRDI portal
Publication:667648

DOI: 10.1007/S10208-017-9359-7
zbMATH Open: 1412.62042
arXiv: 1604.04054
OpenAlex: W2963053844
MaRDI QID: Q667648


Authors: Gilles Blanchard, Nicole Mücke


Publication date: 1 March 2019

Published in: Foundations of Computational Mathematics

Abstract: We consider a statistical inverse learning problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with additive noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in n but also in the explicit dependency of the constant factor on the variance of the noise and the radius of the source condition set.
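The observation model Y_i = (Af)(X_i) + noise and a spectral regularization estimator can be illustrated with a small numerical sketch. The following is a hypothetical discretized setup (the grid, the averaging operator A, and the Tikhonov parameter are illustrative choices, not the paper's construction); Tikhonov/ridge regularization is used here as one representative member of the spectral regularization family the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization: f lives on a grid of d points,
# A is a known linear (local averaging) operator on that grid.
d, n, lam, sigma = 200, 500, 1e-3, 0.05
grid = np.linspace(0.0, 1.0, d)
f_true = np.sin(2 * np.pi * grid)          # signal to recover

width = 10                                  # averaging window (illustrative)
A = np.array([[1.0 / width if abs(i - j) < width / 2 else 0.0
               for j in range(d)] for i in range(d)])

# Random design: i.i.d. design points X_i (here: uniform grid indices),
# observations Y_i = (A f)(X_i) + additive Gaussian noise.
idx = rng.integers(0, d, size=n)
y = (A @ f_true)[idx] + sigma * rng.normal(size=n)

# Rows of A evaluated at the sampled design points.
S = A[idx, :]                               # shape (n, d)

# Tikhonov (ridge) regularization, one instance of spectral regularization:
#   f_hat = (S^T S / n + lam I)^{-1} (S^T y / n)
f_hat = np.linalg.solve(S.T @ S / n + lam * np.eye(d), S.T @ y / n)

err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
print(f"relative L2 error: {err:.3f}")
```

Replacing the Tikhonov filter with other spectral filters (e.g. spectral cut-off or Landweber iteration) changes only how the eigenvalues of S^T S / n are regularized; the rates studied in the paper cover this whole class.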


Full work available at URL: https://arxiv.org/abs/1604.04054










Cited In (52)





