The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs
DOI: 10.3150/20-BEJ1307 · zbMATH Open: 1473.62109 · arXiv: 1811.01061 · OpenAlex: W3193613679 · MaRDI QID: Q1983602 · FDO: Q1983602
Steffen Grünewälder, Stephen Page
Publication date: 10 September 2021
Published in: Bernoulli
Full work available at URL: https://arxiv.org/abs/1811.01061
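For orientation, below is a minimal sketch of the Goldenshluger-Lepski selection rule that this publication studies, with kernel ridge regression standing in for the paper's constrained (Ivanov-regularised) least-squares estimators over an RKHS. The Gaussian kernel, the parameter grid, and the majorant `maj` are illustrative assumptions, not the calibration derived in the paper.

```python
# Generic Goldenshluger-Lepski selection over a family of kernel estimators.
# Kernel ridge regression is a stand-in for the paper's constrained
# least-squares estimators; maj() is a hypothetical stochastic-error majorant.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-1.0, 1.0, n)
y = np.sin(np.pi * X) + 0.3 * rng.standard_normal(n)

def gram(a, b, gamma=10.0):
    # Gaussian kernel Gram matrix (gamma is an illustrative choice).
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

K = gram(X, X)

def krr_fit(lam):
    # In-sample kernel ridge predictions for regularisation parameter lam.
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return K @ alpha

lams = np.geomspace(1e-4, 1.0, 12)           # candidate smoothing parameters
fits = {lam: krr_fit(lam) for lam in lams}

def maj(lam, c=1.0):
    # Hypothetical majorant of the stochastic error at parameter lam.
    return c / np.sqrt(n * lam)

def dist(lam1, lam2):
    # Empirical L2 distance between two fitted functions.
    return np.sqrt(np.mean((fits[lam1] - fits[lam2]) ** 2))

def B(lam):
    # Pairwise comparison against all less-regularised candidates: the excess
    # of the distance over their majorants is a data-driven bias proxy.
    return max(0.0, max(dist(lp, lam) - maj(lp) for lp in lams if lp <= lam))

# Goldenshluger-Lepski choice: balance the bias proxy against the majorant.
lam_hat = min(lams, key=lambda lam: B(lam) + maj(lam))
print(f"selected regularisation parameter: {lam_hat:.4g}")
```

The selected parameter balances the pairwise-comparison bias proxy B against the stochastic majorant, which is the core mechanism of the Goldenshluger-Lepski method; the paper's contribution lies in making this work for constrained least-squares estimators over RKHSs.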
Recommendations
- Minimal penalty for Goldenshluger-Lepski method
- An alternative point of view on Lepski's method
- Ivanov-regularised least-squares estimators over large RKHSs and their interpolation spaces
- Adaptive estimation in the functional nonparametric regression model
- Adaptive estimation of linear functionals in functional linear models
MSC classification
- Nonparametric estimation (62G05)
- Density estimation (62G07)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
Cites Work
- Weak convergence and empirical processes. With applications to statistics
- Mathematical Foundations of Infinite-Dimensional Statistical Models
- Support Vector Machines
- Gaussian model selection
- Risk bounds for model selection via penalization
- Structural adaptation via \(\mathbb L_p\)-norm oracle inequalities
- Asymptotically Minimax Adaptive Estimation. I: Upper Bounds. Optimally Adaptive Estimates
- Title not available
- Minimal penalties for Gaussian model selection
- Probability with Martingales
- Title not available
- Localized algorithms for multiple kernel learning
- Bandwidth selection in kernel density estimation: oracle inequalities and adaptive minimax optimality
- Choosing multiple parameters for support vector machines
- Tikhonov, Ivanov and Morozov regularization for support vector machine learning
- Model selection for regression on a random design
- Universal pointwise selection rule in multivariate function estimation
- General selection rule from a family of linear estimators
- Adaptive kernel methods using the balancing principle
- Cross-validation based adaptation for regularization operators in learning theory
- A new concentration result for regularized risk minimizers
- Estimating the approximation error in learning theory
- Statistical Inverse Estimation in Hilbert Scales
- Scales of Banach spaces
- Title not available
- On a Problem of Adaptive Estimation in Gaussian White Noise
- An alternative point of view on Lepski's method
- Optimal discretization of inverse problems in Hilbert scales. Regularization and self-regularization of projection methods
- Optimal regression rates for SVMs using Gaussian kernels
- On adaptive inverse estimation of linear functional in Hilbert scales
- Asymptotically Minimax Adaptive Estimation. II: Schemes without Optimal Adaptation: Adaptive Estimators
- Balancing principle in supervised learning for a general regularization scheme
- Analysis of regularized Nyström subsampling for regression functions of low smoothness
- The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs
- Nonlinear Tikhonov regularization in Hilbert scales with balancing principle tuning parameter in statistical inverse problems
- Ivanov-Regularised Least-Squares Estimators over Large RKHSs and Their Interpolation Spaces
Cited In (1)