Intrinsic dimension adaptive partitioning for kernel methods
Publication:5089718
DOI: 10.1137/21M1435690
zbMATH Open: 1491.68154
OpenAlex: W3185132568
MaRDI QID: Q5089718
FDO: Q5089718
Authors: Thomas Hamm, Ingo Steinwart
Publication date: 15 July 2022
Published in: SIAM Journal on Mathematics of Data Science
Abstract: We prove minimax optimal learning rates for kernel ridge regression and support vector machines based on a data-dependent partition of the input space, where the dependence on the dimension of the input space is replaced by the fractal dimension of the support of the data-generating distribution. We further show that these optimal rates can be achieved by a training/validation procedure without any prior knowledge of this intrinsic dimension of the data. Finally, we conduct extensive experiments which demonstrate that the learning methods we consider generalize from a dataset that is non-trivially embedded in a much higher-dimensional space just as well as from the original dataset.
Full work available at URL: https://arxiv.org/abs/2107.07750
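The abstract describes fitting kernel methods on a data-dependent partition of the input space, with the partition granularity selected by a training/validation procedure. A minimal sketch of this general idea (not the authors' implementation; the centers-based partition, kernel parameters, and candidate grid below are illustrative assumptions) could look like:

```python
# Illustrative sketch: kernel ridge regression on a data-dependent
# partition of the input space, with the number of cells chosen by a
# train/validation split. All names and parameter values are hypothetical.
import numpy as np

def gaussian_kernel(A, B, gamma):
    # Pairwise Gaussian kernel matrix between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_partition_krr(X, y, n_cells, gamma=1.0, lam=1e-2, seed=0):
    # Data-dependent partition: assign each training point to its nearest
    # of n_cells centers sampled from the data, then fit one kernel ridge
    # regression model per cell.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_cells, replace=False)]
    cell = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    models = {}
    for c in range(n_cells):
        idx = np.flatnonzero(cell == c)
        if len(idx) == 0:
            continue
        K = gaussian_kernel(X[idx], X[idx], gamma)
        alpha = np.linalg.solve(K + lam * np.eye(len(idx)), y[idx])
        models[c] = (X[idx], alpha)
    return centers, models

def predict(Xt, centers, models, gamma=1.0):
    # Route each test point to its cell and evaluate that cell's model.
    cell = np.argmin(((Xt[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    out = np.zeros(len(Xt))
    for i, c in enumerate(cell):
        Xi, alpha = models[c]
        out[i] = gaussian_kernel(Xt[i:i + 1], Xi, gamma) @ alpha
    return out

# Train/validation selection of the partition size, mimicking the
# adaptivity (without prior knowledge of the intrinsic dimension)
# described in the abstract.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(400, 3))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(400)
tr, va = np.arange(300), np.arange(300, 400)
best = min(
    (1, 2, 4, 8),
    key=lambda m: np.mean(
        (predict(X[va], *fit_partition_krr(X[tr], y[tr], m)) - y[va]) ** 2
    ),
)
```

The key point this toy example conveys is that the partition itself is built from the data and the right granularity is picked purely by held-out error, so no intrinsic-dimension parameter needs to be supplied.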
Recommendations
- Adaptive learning rates for support vector machines working on data with low intrinsic dimension
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- Optimal regression rates for SVMs using Gaussian kernels
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
MSC classifications:
- Learning and adaptive systems in artificial intelligence (68T05)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Computational aspects of data analysis and big data (68T09)
Cites Work
- A note on the dimensions of Assouad and Aikawa
- A tree-based regressor that adapts to intrinsic dimension
- Adaptive learning rates for support vector machines working on data with low intrinsic dimension
- Adaptive nonparametric regression with the \(K\)-nearest neighbour fused Lasso
- Bayesian manifold regression
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- Fast learning rates for plug-in classifiers
- Fast rates for support vector machines using Gaussian kernels
- Geometric approximation algorithms
- Improved classification rates under refined margin conditions
- Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
- Learning and approximation by Gaussians on Riemannian manifolds
- Lectures on analysis on metric spaces
- Manifold regularization: a geometric framework for learning from labeled and unlabeled examples
- Minimax-optimal classification with dyadic decision trees
- On deep learning as a remedy for the curse of dimensionality in nonparametric regression
- Optimal aggregation of classifiers in statistical learning.
- Optimal global rates of convergence for nonparametric regression
- Optimal learning rates for localized SVMs
- Optimal regression rates for SVMs using Gaussian kernels
- Outer Minkowski content for some classes of closed sets
- Rates of convergence of nearest neighbor estimation under arbitrary sampling
- SVM learning and Lp approximation by Gaussians on Riemannian manifolds
- Smooth discrimination analysis
- Support Vector Machines
Cited In (4)
- The effect of intrinsic dimension on the Bayes-error of projected quadratic discriminant classification
- Adaptive learning rates for support vector machines working on data with low intrinsic dimension
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality