Empirical risk minimization as parameter choice rule for general linear regularization methods

From MaRDI portal
Publication:2179243

DOI: 10.1214/19-AIHP966
zbMATH Open: 1439.62096
arXiv: 1703.07809
OpenAlex: W3004886965
MaRDI QID: Q2179243
FDO: Q2179243


Authors: Yanyan Li


Publication date: 12 May 2020

Published in: Annales de l'Institut Henri Poincaré. Probabilités et Statistiques

Abstract: We consider the statistical inverse problem to recover $f$ from noisy measurements $Y = Tf + \sigma\xi$ where $\xi$ is Gaussian white noise and $T$ a compact operator between Hilbert spaces. Considering general reconstruction methods of the form $\hat{f}_\alpha = q_\alpha\left(T^*T\right)T^*Y$ with an ordered filter $q_\alpha$, we investigate the choice of the regularization parameter $\alpha$ by minimizing an unbiased estimate of the predictive risk $\mathbb{E}\left[\Vert Tf - T\hat{f}_\alpha\Vert^2\right]$. The corresponding parameter $\alpha_{\mathrm{pred}}$ and its usage are well-known in the literature, but oracle inequalities and optimality results in this general setting are unknown. We prove a (generalized) oracle inequality, which relates the direct risk $\mathbb{E}\left[\Vert f - \hat{f}_{\alpha_{\mathrm{pred}}}\Vert^2\right]$ with the oracle prediction risk $\inf_{\alpha>0}\mathbb{E}\left[\Vert Tf - T\hat{f}_\alpha\Vert^2\right]$. From this oracle inequality we are then able to conclude that the investigated parameter choice rule is of optimal order. Finally we also present numerical simulations, which support the order optimality of the method and the quality of the parameter choice in finite sample situations.
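
As a rough illustration of the setting described in the abstract, the sketch below discretizes an ill-posed operator, reconstructs with a Tikhonov-type ordered filter $\hat{f}_\alpha = q_\alpha(T^*T)T^*Y$, and selects $\alpha_{\mathrm{pred}}$ by minimizing a standard Mallows/SURE-type unbiased estimate of the prediction risk over a grid. This is not the authors' code: the concrete filter, the finite-dimensional noise model with known $\sigma$, the particular risk estimator, the grid search, and all names (`q_tikhonov`, `prediction_risk_estimate`, `reconstruct`) are illustrative assumptions.

```python
# Minimal toy sketch (assumptions as stated above), not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

# Discretized compact operator with polynomially decaying singular values
# (mildly ill-posed toy problem); U, V are random orthonormal bases.
n = 200
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.arange(1, n + 1, dtype=float) ** -1.0      # singular values s_k ~ k^{-1}
T = (U * s) @ V.T                                  # T = U diag(s) V^T

f_true = V @ s ** 1.5                              # "smooth" truth (toy source condition)
sigma = 1e-3                                       # noise level assumed known here
Y = T @ f_true + sigma * rng.standard_normal(n)    # Y = T f + sigma * xi

def q_tikhonov(lam, alpha):
    """Ordered filter q_alpha(lambda) = 1 / (lambda + alpha) (Tikhonov)."""
    return 1.0 / (lam + alpha)

def reconstruct(Y, alpha):
    """f_hat_alpha = q_alpha(T^*T) T^* Y, computed via the SVD of T."""
    q = q_tikhonov(s ** 2, alpha)
    return V @ (q * s * (U.T @ Y))

def prediction_risk_estimate(Y, alpha):
    """Mallows/SURE-type unbiased estimate (up to a constant) of the prediction
    risk E||Tf - T f_hat_alpha||^2; here T f_hat_alpha = S_alpha Y is linear in Y
    with S_alpha = U diag(s^2 q_alpha(s^2)) U^T."""
    w = s ** 2 * q_tikhonov(s ** 2, alpha)         # eigenvalues of S_alpha
    residual = Y - U @ (w * (U.T @ Y))
    return residual @ residual + 2.0 * sigma ** 2 * w.sum() - n * sigma ** 2

# Empirical risk minimization: pick alpha_pred on a logarithmic grid.
alphas = np.logspace(-10, 0, 200)
alpha_pred = min(alphas, key=lambda a: prediction_risk_estimate(Y, a))
f_hat = reconstruct(alpha=alpha_pred, Y=Y)
print(f"alpha_pred = {alpha_pred:.2e}")
print(f"direct error ||f_hat - f|| = {np.linalg.norm(f_hat - f_true):.3e}")
```

The SVD-based formulation keeps the filter explicit, so a spectral cut-off or Landweber-type filter could be swapped in by replacing `q_tikhonov` alone; the parameter choice rule itself is unchanged.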


Full work available at URL: https://arxiv.org/abs/1703.07809








