Distribution-free uncertainty quantification for kernel methods by gradient perturbations

Publication: Q2320596

DOI: 10.1007/S10994-019-05822-1
zbMATH Open: 1494.62012
arXiv: 1812.09632
OpenAlex: W3100536890
Wikidata: Q127610749 (Scholia: Q127610749)
MaRDI QID: Q2320596
FDO: Q2320596

Krisztián B. Kis, Balázs Csanád Csáji

Publication date: 23 August 2019

Published in: Machine Learning

Abstract: We propose a data-driven approach to quantifying the uncertainty of models constructed by kernel methods. The approach minimizes the needed distributional assumptions: instead of working with, for example, Gaussian processes or exponential families, it only requires mild regularity of the measurement noise, such as symmetry or exchangeability. Building on recent results from finite-sample system identification, we show that by perturbing the residuals in the gradient of the objective function, information can be extracted about the uncertainty of the model. In particular, we provide an algorithm to build exact, non-asymptotically guaranteed, distribution-free confidence regions for ideal, noise-free representations of the function we try to estimate. For typical convex quadratic problems with symmetric noise, the regions are star convex, centered around a given nominal estimate, and admit efficient ellipsoidal outer approximations. Finally, we illustrate the ideas on standard kernel methods such as LS-SVC, KRR, ε-SVR and kernelized LASSO.
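
To make the gradient-perturbation idea concrete, below is a minimal sketch for the special case of kernel ridge regression (KRR) with independent, symmetric measurement noise, in the spirit of the sign-perturbation constructions from finite-sample system identification that the abstract alludes to. The statistic, function names and parameter choices (m, q) are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

def krr_fit(K, y, lam):
    # Nominal KRR estimate: the minimizer of ||y - K a||^2 + lam * a' K a,
    # i.e. the solution of (K + lam * I) a = y for positive definite K.
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

def in_confidence_region(alpha, K, y, lam, m=100, q=5, seed=None):
    # Sign-perturbation rank test (illustrative): accept the candidate
    # `alpha` if the reference gradient statistic is not among the q
    # largest of the m statistics. Under symmetric noise, acceptance at
    # the true (noise-free) model has probability 1 - q/m.
    rng = np.random.default_rng(seed)
    r = y - K @ alpha                        # residuals at the candidate

    def grad_norm_sq(res):
        # Objective gradient with (possibly sign-perturbed) residuals.
        g = -2.0 * K @ res + 2.0 * lam * K @ alpha
        return float(g @ g)

    z0 = grad_norm_sq(r)                     # reference statistic
    signs = rng.choice([-1.0, 1.0], size=(m - 1, len(y)))
    zs = [grad_norm_sq(s * r) for s in signs]
    # Count perturbed statistics beating the reference; break ties at random.
    beats = sum(z > z0 or (z == z0 and rng.random() < 0.5) for z in zs)
    return beats >= q

# Toy usage with a Gaussian kernel and symmetric (Laplacian) noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.1 ** 2))
y = np.sin(2 * np.pi * x) + rng.laplace(scale=0.2, size=x.size)
alpha_hat = krr_fit(K, y, lam=1e-2)
print(in_confidence_region(alpha_hat, K, y, lam=1e-2))  # True: zero gradient

Scanning candidate coefficient vectors with such a test traces out the star-convex confidence region described in the abstract; the nominal estimate, where the gradient vanishes, is always accepted.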


Full work available at URL: https://arxiv.org/abs/1812.09632




Cited in 5 documents.

