Intrinsic dimension adaptive partitioning for kernel methods
DOI: 10.1137/21M1435690 · zbMATH Open: 1491.68154 · arXiv: 2107.07750 · OpenAlex: W3185132568 · MaRDI QID: Q5089718 · FDO: Q5089718
Authors: Thomas Hamm, Ingo Steinwart
Publication date: 15 July 2022
Published in: SIAM Journal on Mathematics of Data Science
Full work available at URL: https://arxiv.org/abs/2107.07750
Recommendations
- Adaptive learning rates for support vector machines working on data with low intrinsic dimension
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- Optimal regression rates for SVMs using Gaussian kernels
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
MSC classification
- Learning and adaptive systems in artificial intelligence (68T05)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Computational aspects of data analysis and big data (68T09)
Cites Work
- Support Vector Machines
- Optimal global rates of convergence for nonparametric regression
- Smooth discrimination analysis
- Title not available
- Lectures on analysis on metric spaces
- Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
- Outer Minkowski content for some classes of closed sets
- A note on the dimensions of Assouad and Aikawa
- Optimal aggregation of classifiers in statistical learning
- Bayesian manifold regression
- A tree-based regressor that adapts to intrinsic dimension
- Learning and approximation by Gaussians on Riemannian manifolds
- Fast learning rates for plug-in classifiers
- Fast rates for support vector machines using Gaussian kernels
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- Manifold regularization: a geometric framework for learning from labeled and unlabeled examples
- Geometric approximation algorithms
- SVM learning and Lp approximation by Gaussians on Riemannian manifolds
- Rates of convergence of nearest neighbor estimation under arbitrary sampling
- Minimax-optimal classification with dyadic decision trees
- Title not available
- Optimal regression rates for SVMs using Gaussian kernels
- Title not available
- On deep learning as a remedy for the curse of dimensionality in nonparametric regression
- Improved classification rates under refined margin conditions
- Title not available
- Title not available
- Adaptive learning rates for support vector machines working on data with low intrinsic dimension
- Adaptive nonparametric regression with the K-nearest neighbour fused lasso
Cited in 3 publications.