The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs


DOI: 10.3150/20-BEJ1307 · zbMATH Open: 1473.62109 · arXiv: 1811.01061 · OpenAlex: W3193613679 · MaRDI QID: Q1983602

Steffen Grünewälder, Stephen Page

Publication date: 10 September 2021

Published in: Bernoulli

Abstract: We study an adaptive estimation procedure called the Goldenshluger-Lepski method in the context of reproducing kernel Hilbert space (RKHS) regression. Adaptive estimation provides a way of selecting tuning parameters for statistical estimators using only the available data. This allows us to perform estimation without making strong assumptions about the estimand. In contrast to procedures such as training and validation, the Goldenshluger-Lepski method uses all of the data to produce non-adaptive estimators for a range of values of the tuning parameters. An adaptive estimator is selected by performing pairwise comparisons between these non-adaptive estimators. Applying the Goldenshluger-Lepski method is non-trivial as it requires a simultaneous high-probability bound on all of the pairwise comparisons. In the RKHS regression context, we choose our non-adaptive estimators to be clipped least-squares estimators constrained to lie in a ball in an RKHS. Applying the Goldenshluger-Lepski method in this context is made more complicated by the fact that we cannot use the L2 norm for performing the pairwise comparisons as it is unknown. We use the method to address two regression problems. In the first problem the RKHS is fixed, while in the second problem we adapt over a collection of RKHSs.
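The selection rule described in the abstract — compare each non-adaptive estimator against the others and pick the one whose worst-case excess discrepancy plus a majorant term is smallest — can be sketched as follows. This is a generic illustration of the Goldenshluger-Lepski selection step, not the paper's exact procedure: the function names are hypothetical, the tuning parameters are indexed by a scalar, and the comparisons use an empirical Euclidean norm of fitted values as a stand-in, since (as the abstract notes) the true L2 norm is unavailable.

```python
import numpy as np

def gl_select(estimates, sigma):
    """Goldenshluger-Lepski style selection (illustrative sketch).

    estimates: dict mapping a scalar tuning parameter h to the vector of
               fitted values of the corresponding non-adaptive estimator.
    sigma:     dict mapping h to its majorant (high-probability bound on
               the estimator's stochastic error), assumed given.
    """
    params = sorted(estimates)
    best_h, best_val = None, np.inf
    for h in params:
        # B(h): worst positive-part excess of the distance from the
        # estimator at h to each "smaller" estimator over its majorant.
        b = max(
            (np.linalg.norm(estimates[h] - estimates[hp]) - sigma[hp]
             for hp in params if hp <= h),
            default=0.0,
        )
        b = max(b, 0.0)
        # Select the parameter minimizing B(h) + sigma(h).
        val = b + sigma[h]
        if val < best_val:
            best_h, best_val = h, val
    return best_h
```

In the paper's setting the tuning parameter would index the radius of the RKHS ball (and, in the second problem, the RKHS itself), and the majorants come from the simultaneous high-probability bound on all pairwise comparisons.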


Full work available at URL: https://arxiv.org/abs/1811.01061










Cited In (1)





