Intrinsic dimension adaptive partitioning for kernel methods
Abstract: We prove minimax optimal learning rates for kernel ridge regression and support vector machines based on a data-dependent partition of the input space, where the dependence on the dimension of the input space is replaced by the fractal dimension of the support of the data-generating distribution. We further show that these optimal rates can be achieved by a training-validation procedure without any prior knowledge of this intrinsic dimension of the data. Finally, we conduct extensive experiments demonstrating that the considered learning methods generalize from a dataset that is non-trivially embedded in a much higher-dimensional space just as well as from the original dataset.
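To make the mechanism in the abstract concrete, here is a minimal NumPy sketch, not the authors' implementation: it fits one kernel ridge regressor per cell of a data-dependent Voronoi partition (cells induced by randomly chosen training points), selects the kernel width and regularization by a training-validation split, and compares intrinsically one-dimensional data (a curve in the plane) with the same data isometrically embedded in a 20-dimensional ambient space. The partition rule, the hyperparameter grids, and all function names are illustrative assumptions.

```python
# Illustrative sketch only: local KRR on a data-dependent Voronoi partition,
# with hyperparameters chosen by a train/validation split. Not the paper's
# actual algorithm; all concrete choices below are assumptions.
import numpy as np

def gaussian_kernel(A, B, gamma):
    """Gaussian kernel matrix k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fit_local_krr(X, y, centers, gamma, lam):
    """Fit one ridge regressor per Voronoi cell of `centers`."""
    cell = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    models = {}
    for c in np.unique(cell):
        Xc, yc = X[cell == c], y[cell == c]
        K = gaussian_kernel(Xc, Xc, gamma)
        alpha = np.linalg.solve(K + lam * len(Xc) * np.eye(len(Xc)), yc)
        models[c] = (Xc, alpha)
    return models

def predict_local_krr(models, centers, gamma, X):
    """Route each point to its cell and evaluate that cell's regressor."""
    cell = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    yhat = np.zeros(len(X))
    for i, c in enumerate(cell):
        Xc, alpha = models[c]
        yhat[i] = (gaussian_kernel(X[i:i + 1], Xc, gamma) @ alpha)[0]
    return yhat

rng = np.random.default_rng(0)

def make_data(n, ambient_dim=None):
    """Noisy regression on a circle (intrinsic dimension 1), optionally
    embedded isometrically into a higher-dimensional ambient space."""
    t = rng.uniform(0, 2 * np.pi, n)
    X = np.column_stack([np.cos(t), np.sin(t)])
    y = np.sin(3 * t) + 0.1 * rng.standard_normal(n)
    if ambient_dim is not None:
        Q, _ = np.linalg.qr(rng.standard_normal((ambient_dim, 2)))
        X = X @ Q.T  # orthonormal columns, so distances are preserved
    return X, y

for ambient in (None, 20):
    X, y = make_data(500, ambient)
    Xtr, ytr = X[:300], y[:300]
    Xva, yva = X[300:400], y[300:400]
    Xte, yte = X[400:], y[400:]
    # Data-dependent partition: random training points as Voronoi centers.
    centers = Xtr[rng.choice(len(Xtr), 8, replace=False)]
    # Training-validation selection over a grid, mimicking an adaptive
    # procedure that needs no prior knowledge of the intrinsic dimension.
    best = None
    for gamma in (0.5, 2.0, 8.0):
        for lam in (1e-4, 1e-2, 1.0):
            m = fit_local_krr(Xtr, ytr, centers, gamma, lam)
            err = np.mean((predict_local_krr(m, centers, gamma, Xva) - yva) ** 2)
            if best is None or err < best[0]:
                best = (err, gamma, lam, m)
    _, gamma, lam, m = best
    test_mse = np.mean((predict_local_krr(m, centers, gamma, Xte) - yte) ** 2)
    print(f"ambient dim {X.shape[1]:2d}: test MSE {test_mse:.4f}")
```

Because the embedding uses a matrix with orthonormal columns, pairwise distances are unchanged, so the Gaussian kernel and the partition behave identically in the higher-dimensional space; this is a toy analogue of the embedding experiment described in the abstract.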
Recommendations
- Adaptive learning rates for support vector machines working on data with low intrinsic dimension
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- Optimal regression rates for SVMs using Gaussian kernels
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
Cites work
- Untitled scientific article (zbMATH DE number 7306873)
- Untitled scientific article (zbMATH DE number 5485566)
- Untitled scientific article (zbMATH DE number 6781365)
- Untitled scientific article (zbMATH DE number 3329342)
- A note on the dimensions of Assouad and Aikawa
- A tree-based regressor that adapts to intrinsic dimension
- Adaptive learning rates for support vector machines working on data with low intrinsic dimension
- Adaptive nonparametric regression with the \(K\)-nearest neighbour fused Lasso
- Bayesian manifold regression
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- Fast learning rates for plug-in classifiers
- Fast rates for support vector machines using Gaussian kernels
- Geometric approximation algorithms
- Improved classification rates under refined margin conditions
- Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
- Learning and approximation by Gaussians on Riemannian manifolds
- Lectures on analysis on metric spaces
- Manifold regularization: a geometric framework for learning from labeled and unlabeled examples
- Minimax-optimal classification with dyadic decision trees
- On deep learning as a remedy for the curse of dimensionality in nonparametric regression
- Optimal aggregation of classifiers in statistical learning
- Optimal global rates of convergence for nonparametric regression
- Optimal learning rates for localized SVMs
- Optimal regression rates for SVMs using Gaussian kernels
- Outer Minkowski content for some classes of closed sets
- Rates of convergence of nearest neighbor estimation under arbitrary sampling
- SVM learning and \(L_p\) approximation by Gaussians on Riemannian manifolds
- Smooth discrimination analysis
- Support Vector Machines
Cited in (4)
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality
- Adaptive learning rates for support vector machines working on data with low intrinsic dimension
- Untitled scientific article (zbMATH DE number 2062634)
- The effect of intrinsic dimension on the Bayes-error of projected quadratic discriminant classification