Bounding the generalization error of convex combinations of classifiers: Balancing the dimensionality and the margins.
From MaRDI portal
Publication:1872344
DOI: 10.1214/aoap/1042765667
zbMath: 1073.62535
arXiv: math/0405345
OpenAlex: W1993514889
MaRDI QID: Q1872344
Fernando Lozano, Vladimir I. Koltchinskii, Dmitriy Panchenko
Publication date: 6 May 2003
Published in: The Annals of Applied Probability
Full work available at URL: https://arxiv.org/abs/math/0405345
empirical process; bagging; boosting; concentration inequalities; combined classifier; margin; Rademacher process; approximate dimension; generalization error; random entropies
Asymptotic properties of nonparametric inference (62G20); Strong limit theorems (60F15); Nonparametric inference (62G99)
Related Items
- Robust sub-Gaussian estimation of a mean vector in nearly linear time
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Complexities of convex combinations and bounding the generalization error in classification
Uses Software
Cites Work
- Bagging predictors
- A decision-theoretic generalization of on-line learning and an application to boosting
- Bounds on margin distributions in learning problems
- Improved generalization through explicit optimization of margins
- A geometric approach to leveraging weak learners
- Arcing classifiers. (With discussion)
- Boosting the margin: a new explanation for the effectiveness of voting methods
- Empirical margin distributions and bounding the generalization error of combined classifiers
- About the constants in Talagrand's concentration inequalities for empirical processes.
- Support-vector networks
- Weak convergence and empirical processes. With applications to statistics
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- Rademacher penalties and structural risk minimization
- Neural Network Learning
- Model selection and error estimation
- New concentration inequalities in product spaces