Fast rates for support vector machines using Gaussian kernels
Abstract: For binary classification we establish learning rates up to the order of \(n^{-1}\) for support vector machines (SVMs) with hinge loss and Gaussian RBF kernels. These rates are given in terms of two assumptions on the considered distributions: Tsybakov's noise assumption, used to establish a small estimation error, and a new geometric noise condition, used to bound the approximation error. Unlike previously proposed concepts for bounding the approximation error, the geometric noise assumption does not employ any smoothness assumption.
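For orientation, a minimal sketch of the standard objects behind these assumptions (notation is ours, not taken from this record, and the paper's exact formulation may differ): the SVM decision function minimizes the regularized empirical hinge risk over the Gaussian RKHS \(H_\sigma\),
\[
f_{D,\lambda} = \operatorname*{arg\,min}_{f \in H_\sigma} \; \lambda \|f\|_{H_\sigma}^2 + \frac{1}{n} \sum_{i=1}^{n} \max\{0,\, 1 - y_i f(x_i)\}, \qquad k_\sigma(x, x') = e^{-\sigma^2 \|x - x'\|_2^2},
\]
and Tsybakov's noise assumption with exponent \(q \in [0, \infty]\) controls the mass near the decision boundary: writing \(\eta(x) = P(y = 1 \mid x)\), it requires
\[
P_X\bigl(\{x : |2\eta(x) - 1| \le t\}\bigr) \le C\, t^{q} \quad \text{for all } t > 0.
\]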
Cites work
- scientific article; zbMATH DE number 4004880 (title unavailable)
- scientific article; zbMATH DE number 3676637 (title unavailable)
- scientific article; zbMATH DE number 44592 (title unavailable)
- scientific article; zbMATH DE number 49190 (title unavailable)
- scientific article; zbMATH DE number 192914 (title unavailable)
- scientific article; zbMATH DE number 3536702 (title unavailable)
- scientific article; zbMATH DE number 3602126 (title unavailable)
- scientific article; zbMATH DE number 3996455 (title unavailable)
- scientific article; zbMATH DE number 893887 (title unavailable)
- scientific article; zbMATH DE number 3081828 (title unavailable)
- doi:10.1162/153244303321897690
- doi:10.1162/1532443041827925
- A Bennett concentration inequality and its application to suprema of empirical processes
- About the constants in Talagrand's concentration inequalities for empirical processes
- An Explicit Description of the Reproducing Kernel Hilbert Spaces of Gaussian RBF Kernels
- An introduction to support vector machines and other kernel-based learning methods
- Concentration inequalities for set-indexed empirical processes
- Consistency of Support Vector Machines and Other Regularized Kernel Classifiers
- Convexity, Classification, and Risk Bounds
- Convolution operators and L(p, q) spaces
- Estimating the approximation error in learning theory
- Improving the sample complexity using global data
- Left concentration inequalities for empirical processes
- Local Rademacher complexities
- Minimax nonparametric classification. I: Rates of convergence
- On the influence of the kernel on the consistency of support vector machines
- On the mathematical foundations of learning
- Optimal aggregation of classifiers in statistical learning
- Optimal rates of convergence to Bayes risk in nonparametric discrimination
- Piecewise-polynomial approximations of functions of the classes $W_p^{\alpha}$
- Sharper bounds for Gaussian and empirical processes
- Smooth discrimination analysis
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Support vector machines are universally consistent
- Theory of Reproducing Kernels
- Weak convergence and empirical processes. With applications to statistics
Cited in (first 100 items shown)
- Sparse nonparametric regression with regularized tensor product kernel
- Sufficient dimension reduction via squared-loss mutual information estimation
- Regularization in kernel learning
- Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications
- Estimates of covering numbers of convex sets with slowly decaying orthogonal subsets
- The new interpretation of support vector machines on statistical learning theory
- Divide-and-conquer for debiased \(l_1\)-norm support vector machine in ultra-high dimensions
- Optimal learning with Gaussians and correntropy loss
- On Robustness of Individualized Decision Rules
- Learning from dependent observations
- Learning rates of multi-kernel regularized regression
- Statistical performance of optimal scoring in reproducing kernel Hilbert spaces
- Approximation by multivariate Bernstein-Durrmeyer operators and learning rates of least-squares regularized regression with multivariate polynomial kernels
- Learning Theory
- Matched Learning for Optimizing Individualized Treatment Strategies Using Electronic Health Records
- Approximate duality
- Logistic classification with varying Gaussians
- Kernel machines with missing covariates
- Feature elimination in kernel machines in moderately high dimensions
- Penalized empirical risk minimization over Besov spaces
- Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions
- D-learning to estimate optimal individual treatment rules
- Stability of unstable learning algorithms
- Relative deviation learning bounds and generalization with unbounded loss functions
- Learning noisy linear classifiers via adaptive and selective sampling
- Estimating individualized treatment rules using outcome weighted learning
- Online classification with varying Gaussians
- Covering numbers of Gaussian reproducing kernel Hilbert spaces
- Learning rates for classification with Gaussian kernels
- Optimal rate for support vector machine regression with Markov chain samples
- A study on the error of distributed algorithms for big data classification with SVM
- Multicategory large margin classification methods: hinge losses vs. coherence functions
- Measuring the capacity of sets of functions in the analysis of ERM
- Robust multicategory support vector machines using difference convex algorithm
- Statistical consistency of coefficient-based conditional quantile regression
- Oracle properties of SCAD-penalized support vector machine
- Nonlinear approximation using Gaussian kernels
- Convergence rates of generalization errors for margin-based classification
- Approximation analysis of learning algorithms for support vector regression and quantile regression
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- Relative Density-Ratio Estimation for Robust Distribution Comparison
- Large margin unified machines with non-i.i.d. process
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Support vector machines regression with unbounded sampling
- Fast learning rates for plug-in classifiers
- Analysis of regression algorithms with unbounded sampling
- Radial kernels and their reproducing kernel Hilbert spaces
- When can support vector machine achieve fast rates of convergence?
- The consistency of least-square regularized regression with negative association sequence
- Learning optimal distributionally robust individualized treatment rules
- Distributed regularized least squares with flexible Gaussian kernels
- Consistency of learning algorithms using Attouch-Wets convergence
- Analysis of regularized least-squares in reproducing kernel Kreĭn spaces
- Fast Gaussian kernel learning for classification tasks based on specially structured global optimization
- SVM learning and \(L_p\) approximation by Gaussians on Riemannian manifolds
- Learning theory approach to a system identification problem involving atomic norm
- Conditional probability estimation based classification with class label missing at random
- Contrast weighted learning for robust optimal treatment rule estimation
- scientific article; zbMATH DE number 7370542 (title unavailable)
- Domain adaptation -- can quantity compensate for quality?
- Unregularized online algorithms with varying Gaussians
- Consistency of support vector machines using additive kernels for additive models
- Consistency and convergence rate for nearest subspace classifier
- Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping
- Learning rate of support vector machine for ranking
- Learning rates of gradient descent algorithm for classification
- Comparison theorems on large-margin learning
- Learning with convex loss and indefinite kernels
- Simultaneous estimations of optimal directions and optimal transformations for functional data
- Sparse kernel regression with coefficient-based \(\ell_q\)-regularization
- Asymptotic normality of support vector machine variants and other regularized kernel methods
- Large‐margin classification with multiple decision rules
- A statistical learning assessment of Huber regression
- scientific article; zbMATH DE number 7306879 (title unavailable)
- Rademacher Chaos Complexities for Learning the Kernel Problem
- Optimal exponential bounds on the accuracy of classification
- The statistical rate for support matrix machines under low rankness and row (column) sparsity
- Support vector machine in big data: smoothing strategy and adaptive distributed inference
- A note on support vector machines with polynomial kernels
- Fast convergence rates of deep neural networks for classification
- Learning by atomic norm regularization with polynomial kernels
- Oracle inequalities for support vector machines that are based on random entropy numbers
- Density-difference estimation
- Targeted Local Support Vector Machine for Age-Dependent Classification
- Fast cross-validation algorithms for least squares support vector machine and kernel ridge regression
- \(\ell^{1}\)-norm support vector machine for ranking with exponentially strongly mixing sequence
- Controlling Cumulative Adverse Risk in Learning Optimal Dynamic Treatment Regimens
- Learning rates of least-square regularized regression with polynomial kernels
- Local Rademacher complexity: sharper risk bounds with and without unlabeled samples
- Fairness-Oriented Learning for Optimal Individualized Treatment Rules
- Learning individualized treatment rules for multiple-domain latent outcomes
- Intrinsic dimension adaptive partitioning for kernel methods
- Augmented direct learning for conditional average treatment effect estimation with double robustness
- Classification with minimax fast rates for classes of Bayes rules with sparse representation
- Adaptive learning rates for support vector machines working on data with low intrinsic dimension
- Simultaneous adaptation to the margin and to complexity in classification
- Refined Rademacher chaos complexity bounds with applications to the multikernel learning problem
- Optimal regression rates for SVMs using Gaussian kernels
- An oracle inequality for regularized risk minimizers with strongly mixing observations
- Statistical performance of support vector machines