Fast rates for support vector machines using Gaussian kernels
From MaRDI portal
Publication: Q995417
DOI: 10.1214/009053606000001226
zbMATH: 1127.68091
arXiv: 0708.1838
OpenAlex: W2003585400
Wikidata: Q59196406 (Scholia: Q59196406)
MaRDI QID: Q995417
Publication date: 3 September 2007
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/0708.1838
Keywords: classification; support vector machines; learning rates; Gaussian RBF kernels; noise assumption; nonlinear discrimination
Asymptotic properties of nonparametric inference (62G20) Learning and adaptive systems in artificial intelligence (68T05) Pattern recognition, speech recognition (68T10) Approximation by arbitrary nonlinear expressions; widths and entropy (41A46)
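An illustrative note, not part of the record itself: the keyword "Gaussian RBF kernels" refers to kernels of the form \(k_\sigma(x, x') = \exp(-\sigma^2 \|x - x'\|^2)\), the convention used in this line of work (other sources scale as \(\exp(-\|x - x'\|^2 / (2\sigma^2))\)). A minimal pure-Python sketch, with hypothetical function and variable names:

```python
import math

def gaussian_rbf_kernel(x, y, sigma=1.0):
    # Gaussian RBF kernel with width parameter sigma:
    #   k_sigma(x, y) = exp(-sigma^2 * ||x - y||^2)
    # (Some texts instead use exp(-||x - y||^2 / (2 * sigma^2)).)
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-(sigma ** 2) * sq_dist)

# Gram matrix on a small sample, as an SVM solver would build internally.
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
gram = [[gaussian_rbf_kernel(p, q) for q in points] for p in points]
```

The width \(\sigma\) is the tuning parameter whose choice, together with the regularization parameter, drives the learning rates studied in the paper.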
Related Items
- Kernel machines with missing covariates
- D-learning to estimate optimal individual treatment rules
- Statistical consistency of coefficient-based conditional quantile regression
- Learning by atomic norm regularization with polynomial kernels
- The new interpretation of support vector machines on statistical learning theory
- Regularization in kernel learning
- Estimates of covering numbers of convex sets with slowly decaying orthogonal subsets
- Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications
- Stability of unstable learning algorithms
- Augmented direct learning for conditional average treatment effect estimation with double robustness
- The consistency of least-square regularized regression with negative association sequence
- ℓ1-Norm support vector machine for ranking with exponentially strongly mixing sequence
- Local Rademacher complexity: sharper risk bounds with and without unlabeled samples
- Sufficient Dimension Reduction via Squared-Loss Mutual Information Estimation
- An oracle inequality for regularized risk minimizers with strongly mixing observations
- Intrinsic Dimension Adaptive Partitioning for Kernel Methods
- Radial kernels and their reproducing kernel Hilbert spaces
- Approximation by multivariate Bernstein-Durrmeyer operators and learning rates of least-squares regularized regression with multivariate polynomial kernels
- Learning from Non-iid Data: Fast Rates for the One-vs-All Multiclass Plug-in Classifiers
- Fast Gaussian kernel learning for classification tasks based on specially structured global optimization
- Consistency of learning algorithms using Attouch–Wets convergence
- Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping
- Fairness-Oriented Learning for Optimal Individualized Treatment Rules
- On Robustness of Individualized Decision Rules
- Fast convergence rates of deep neural networks for classification
- Learning theory approach to a system identification problem involving atomic norm
- Quantitative convergence analysis of kernel based large-margin unified machines
- Approximation analysis of learning algorithms for support vector regression and quantile regression
- Learning noisy linear classifiers via adaptive and selective sampling
- Statistical performance of support vector machines
- Consistency and convergence rate for nearest subspace classifier
- Consistency of support vector machines using additive kernels for additive models
- Oracle properties of SCAD-penalized support vector machine
- Optimal regression rates for SVMs using Gaussian kernels
- Classification with minimax fast rates for classes of Bayes rules with sparse representation
- Penalized empirical risk minimization over Besov spaces
- A study on the error of distributed algorithms for big data classification with SVM
- Relative deviation learning bounds and generalization with unbounded loss functions
- Density-Difference Estimation
- Learning with Convex Loss and Indefinite Kernels
- Refined Rademacher Chaos Complexity Bounds with Applications to the Multikernel Learning Problem
- Support vector machines regression with unbounded sampling
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- A Note on Support Vector Machines with Polynomial Kernels
- Learning Rates for Classification with Gaussian Kernels
- Classification with non-i.i.d. sampling
- Learning rate of support vector machine for ranking
- Simultaneous adaptation to the margin and to complexity in classification
- Feature elimination in kernel machines in moderately high dimensions
- Optimal exponential bounds on the accuracy of classification
- Matched Learning for Optimizing Individualized Treatment Strategies Using Electronic Health Records
- Optimal rates of aggregation in classification under low noise assumption
- Statistical performance of optimal scoring in reproducing kernel Hilbert spaces
- Robust multicategory support vector machines using difference convex algorithm
- Domain adaptation -- can quantity compensate for quality?
- Learning from dependent observations
- Logistic classification with varying Gaussians
- Approximate duality
- Learning from non-identical sampling for classification
- Learning Rates of lq Coefficient Regularization Learning with Gaussian Kernel
- Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions
- Concentration estimates for learning with \(\ell^1\)-regularizer and data dependent hypothesis spaces
- Learning rates of multi-kernel regularized regression
- Covering numbers of Gaussian reproducing kernel Hilbert spaces
- Nonlinear approximation using Gaussian kernels
- Rademacher Chaos Complexities for Learning the Kernel Problem
- Simultaneous estimations of optimal directions and optimal transformations for functional data
- Fast learning rates for plug-in classifiers
- Multicategory large margin classification methods: hinge losses vs. coherence functions
- Unregularized online algorithms with varying Gaussians
- Distributed regularized least squares with flexible Gaussian kernels
- Oracle inequalities for support vector machines that are based on random entropy numbers
- Measuring the Capacity of Sets of Functions in the Analysis of ERM
- Conditional probability estimation based classification with class label missing at random
- On Reject and Refine Options in Multicategory Classification
- Learning rates of gradient descent algorithm for classification
- Analysis of regularized least-squares in reproducing kernel Kreĭn spaces
- Large-margin classification with multiple decision rules
- A statistical learning assessment of Huber regression
- Analysis of Regression Algorithms with Unbounded Sampling
- Relative Density-Ratio Estimation for Robust Distribution Comparison
- Targeted Local Support Vector Machine for Age-Dependent Classification
- Convergence rates of generalization errors for margin-based classification
- Asymptotic normality of support vector machine variants and other regularized kernel methods
- Regularized ranking with convex losses and \(\ell^1\)-penalty
- Adaptive learning rates for support vector machines working on data with low intrinsic dimension
- Learning rates of least-square regularized regression with polynomial kernels
- Learning Optimal Distributionally Robust Individualized Treatment Rules
- Optimal rate for support vector machine regression with Markov chain samples
- Online Classification with Varying Gaussians
- Optimal learning with Gaussians and correntropy loss
- Learning Individualized Treatment Rules for Multiple-Domain Latent Outcomes
- Generalization performance of Gaussian kernels SVMC based on Markov sampling
- Comparison theorems on large-margin learning
- Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory
- Estimating Individualized Treatment Rules Using Outcome Weighted Learning
Cites Work
- Optimal rates of convergence to Bayes risk in nonparametric discrimination
- Sharper bounds for Gaussian and empirical processes
- Smooth discrimination analysis
- A Bennett concentration inequality and its application to suprema of empirical processes
- Left concentration inequalities for empirical processes
- About the constants in Talagrand's concentration inequalities for empirical processes.
- Support vector machines are universally consistent
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Optimal aggregation of classifiers in statistical learning.
- Weak convergence and empirical processes. With applications to statistics
- Convolution operators and L(p, q) spaces
- Local Rademacher complexities
- On the mathematical foundations of learning
- 10.1162/153244302760185252
- Consistency of Support Vector Machines and Other Regularized Kernel Classifiers
- An Explicit Description of the Reproducing Kernel Hilbert Spaces of Gaussian RBF Kernels
- Estimating the approximation error in learning theory
- Minimax nonparametric classification. I: Rates of convergence
- Improving the sample complexity using global data
- 10.1162/153244303321897690
- 10.1162/1532443041827925
- Piecewise-polynomial approximations of functions of the classes \(W_{p}^{\alpha}\)
- Convexity, Classification, and Risk Bounds
- Theory of Reproducing Kernels
- Concentration inequalities for set-indexed empirical processes