Statistical performance of support vector machines
Publication: 2426613
DOI: 10.1214/009053607000000839
zbMath: 1133.62044
arXiv: 0804.0551
OpenAlex: W1983563203
MaRDI QID: Q2426613
Pascal Massart, Gilles Blanchard, Olivier Bousquet
Publication date: 23 April 2008
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/0804.0551
Mathematics Subject Classification:
- 62G20 Asymptotic properties of nonparametric inference
- 62H30 Classification and discrimination; cluster analysis (statistical aspects)
- 62G05 Nonparametric estimation
- 68T05 Learning and adaptive systems in artificial intelligence
- 62M45 Neural nets and related approaches to inference from stochastic processes
- 47N30 Applications of operator theory in probability theory and statistics
Related Items (52)
Cites Work
- Risk bounds for statistical learning
- Fast rates for support vector machines using Gaussian kernels
- A Bennett concentration inequality and its application to suprema of empirical processes
- Empirical margin distributions and bounding the generalization error of combined classifiers
- About the constants in Talagrand's concentration inequalities for empirical processes.
- Support vector machines are universally consistent
- Complexity regularization via localized random penalties
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Optimal aggregation of classifiers in statistical learning.
- Regularization networks and support vector machines
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Simultaneous adaptation to the margin and to complexity in classification
- Empirical minimization
- Square root penalty: Adaption to the margin in classification and in edge estimation
- Local Rademacher complexities
- Classifiers of support vector machine type with \(\ell_1\) complexity regularization
- On the Eigenspectrum of the Gram Matrix and the Generalization Error of Kernel-PCA
- Capacity of reproducing kernel spaces in learning theory
- A new concentration result for regularized risk minimizers
- Scale-sensitive dimensions, uniform convergence, and learnability
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- Minimax nonparametric classification. I: Rates of convergence
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- Learning Theory
- doi:10.1162/153244302760200704
- doi:10.1162/1532443041424337
- doi:10.1162/1532443041424319
- doi:10.1162/153244303321897690
- Piecewise-polynomial approximations of functions of the classes \(W_p^\alpha\)
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Convexity, Classification, and Risk Bounds
- Some applications of concentration inequalities to statistics