The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs
Abstract: We study an adaptive estimation procedure called the Goldenshluger-Lepski method in the context of reproducing kernel Hilbert space (RKHS) regression. Adaptive estimation provides a way of selecting tuning parameters for statistical estimators using only the available data. This allows us to perform estimation without making strong assumptions about the estimand. In contrast to procedures such as training and validation, the Goldenshluger-Lepski method uses all of the data to produce non-adaptive estimators for a range of values of the tuning parameters. An adaptive estimator is then selected by performing pairwise comparisons between these non-adaptive estimators. Applying the Goldenshluger-Lepski method is non-trivial, as it requires a simultaneous high-probability bound on all of the pairwise comparisons. In the RKHS regression context, we choose our non-adaptive estimators to be clipped least-squares estimators constrained to lie in a ball in an RKHS. Applying the Goldenshluger-Lepski method in this context is made more complicated by the fact that the norm in which we would like to perform the pairwise comparisons depends on the unknown distribution of the covariates, so it cannot be computed from the data. We use the method to address two regression problems. In the first problem the RKHS is fixed, while in the second problem we adapt over a collection of RKHSs.
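To make the comparison rule described in the abstract concrete, the following is a minimal sketch of a generic Goldenshluger-Lepski selection rule, not the paper's exact construction. The function name goldenshluger_lepski, the ordering of the tuning grid, the majorants sigmas, and the empirical distance emp_l2 are illustrative assumptions; in particular, the paper must work around the fact that the natural comparison norm is unknown, which this sketch sidesteps by using a computable proxy distance.

```python
import numpy as np

def goldenshluger_lepski(estimates, majorants, dist):
    """Goldenshluger-Lepski selection over a grid of tuning parameters.

    estimates: list of fitted prediction vectors, ordered from the most
               constrained estimator (small RKHS ball, large bias) to the
               least constrained (large ball, large stochastic error)
    majorants: high-probability bounds sigma_i on the stochastic error of
               each estimator, increasing along the grid
    dist:      distance between two estimates; a computable proxy here,
               since the population norm is unknown
    """
    n = len(estimates)
    scores = []
    for i in range(n):
        # Compare estimator i against every less constrained estimator j.
        # Distance in excess of j's majorant is evidence of bias in i.
        bias_proxy = max(
            (dist(estimates[i], estimates[j]) - majorants[j]
             for j in range(i + 1, n)),
            default=0.0,
        )
        # Balance the bias proxy against i's own stochastic-error bound.
        scores.append(max(bias_proxy, 0.0) + majorants[i])
    return int(np.argmin(scores))

# Usage with placeholder fits and an empirical L2 distance.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = [rng.normal(size=50) for _ in range(5)]   # stand-in estimators
    sigmas = [0.1 * (k + 1) for k in range(5)]       # stand-in majorants
    emp_l2 = lambda f, g: np.sqrt(np.mean((f - g) ** 2))
    print(goldenshluger_lepski(grid, sigmas, emp_l2))
```

The key design point is that the selected index minimises the sum of a bias proxy (how far an estimator sits from all less constrained ones, beyond what their stochastic error explains) and the estimator's own error majorant; establishing that all pairwise comparisons are simultaneously controlled with high probability is the technical core of the method.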
Recommendations
- Minimal penalty for Goldenshluger-Lepski method
- An alternative point of view on Lepski's method
- Ivanov-regularised least-squares estimators over large RKHSs and their interpolation spaces
- Adaptive estimation in the functional nonparametric regression model
- Adaptive estimation of linear functionals in functional linear models
Cites work
- scientific article; zbMATH DE number 4011018
- scientific article; zbMATH DE number 3536702
- scientific article; zbMATH DE number 1064642
- A new concentration result for regularized risk minimizers
- Adaptive kernel methods using the balancing principle
- An alternative point of view on Lepski's method
- Analysis of regularized Nyström subsampling for regression functions of low smoothness
- Asymptotically Minimax Adaptive Estimation. I: Upper Bounds. Optimally Adaptive Estimates
- Asymptotically Minimax Adaptive Estimation. II. Schemes without Optimal Adaptation: Adaptive Estimators
- Balancing principle in supervised learning for a general regularization scheme
- Bandwidth selection in kernel density estimation: oracle inequalities and adaptive minimax optimality
- Choosing multiple parameters for support vector machines
- Cross-validation based adaptation for regularization operators in learning theory
- Estimating the approximation error in learning theory
- Gaussian model selection
- General selection rule from a family of linear estimators
- Ivanov-regularised least-squares estimators over large RKHSs and their interpolation spaces
- Localized algorithms for multiple kernel learning
- Mathematical foundations of infinite-dimensional statistical models
- Minimal penalties for Gaussian model selection
- Model selection for regression on a random design
- Nonlinear Tikhonov regularization in Hilbert scales with balancing principle tuning parameter in statistical inverse problems
- On a Problem of Adaptive Estimation in Gaussian White Noise
- On adaptive inverse estimation of linear functional in Hilbert scales
- Optimal discretization of inverse problems in Hilbert scales. Regularization and self-regularization of projection methods
- Optimal regression rates for SVMs using Gaussian kernels
- Probability with Martingales
- Risk bounds for model selection via penalization
- Scales of Banach spaces
- Statistical Inverse Estimation in Hilbert Scales
- Structural adaptation via \(\mathbb L_p\)-norm oracle inequalities
- Support Vector Machines
- The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs
- Tikhonov, Ivanov and Morozov regularization for support vector machine learning
- Universal pointwise selection rule in multivariate function estimation
- Weak convergence and empirical processes. With applications to statistics
Cited in (3)