Complexities of convex combinations and bounding the generalization error in classification
DOI: 10.1214/009053605000000228 · zbMATH Open: 1080.62045 · arXiv: math/0405356 · OpenAlex: W1980526554 · Wikidata: Q105584231 · Scholia: Q105584231 · MaRDI QID: Q2583410 · FDO: Q2583410
Authors: Vladimir Koltchinskii, D. Panchenko
Publication date: 16 January 2006
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/math/0405356
Recommendations
- Bounding the generalization error of convex combinations of classifiers: Balancing the dimensionality and the margins.
- On the generalization error of fixed combinations of classifiers
- Convexity, Classification, and Risk Bounds
- Empirical margin distributions and bounding the generalization error of combined classifiers
- Optimal convex error estimators for classification
Mathematics Subject Classification:
- Nonparametric estimation (62G05)
- Asymptotic properties of nonparametric inference (62G20)
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Complexity and performance of numerical algorithms (65Y20)
- Analysis of algorithms and problem complexity (68Q25)
- Computational learning theory (68Q32)
- Strong limit theorems (60F15)
- Artificial intelligence (68T99)
Cites Work
- Weak convergence and empirical processes. With applications to statistics
- Bagging predictors
- Learning Theory and Kernel Machines
- Boosting the margin: a new explanation for the effectiveness of voting methods
- Boosting with early stopping: convergence and consistency
- Arcing classifiers. (With discussion)
- Title not available
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Improved boosting algorithms using confidence-rated predictions
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Local Rademacher complexities
- Uniform Central Limit Theorems
- Some applications of concentration inequalities to statistics
- Empirical margin distributions and bounding the generalization error of combined classifiers
- Convexity, Classification, and Risk Bounds
- Sphere packing numbers for subsets of the Boolean \(n\)-cube with bounded Vapnik-Chervonenkis dimension
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- Title not available
- DOI: 10.1162/1532443041827925
- On the Bayes-risk consistency of regularized boosting methods.
- Title not available
- Process consistency for AdaBoost.
- Aggregated estimators and empirical complexity for least square regression
- DOI: 10.1162/1532443041424319
- Symmetrization approach to concentration inequalities for empirical processes.
- A note on Talagrand's concentration inequality
- DOI: 10.1162/153244303768966111
- Some extensions of an inequality of Vapnik and Chervonenkis
- Title not available
- Bounds on margin distributions in learning problems
- Bounding the generalization error of convex combinations of classifiers: Balancing the dimensionality and the margins.
- Generalization bounds for voting classifiers based on sparsity and clustering.
Cited In (12)
- Bounding the generalization error of convex combinations of classifiers: Balancing the dimensionality and the margins.
- Sample average approximation with heavier tails. I: Non-asymptotic bounds with weak assumptions and stochastic constraints
- Boosting with early stopping: convergence and consistency
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Further results on the margin explanation of boosting: new algorithm and experiments
- Generalization bounds for averaged classifiers
- Concentration estimates for learning with unbounded sampling
- Title not available
- Analysis of boosting algorithms using the smooth margin function
- Generalization bounds for voting classifiers based on sparsity and clustering.
- Sparsity in penalized empirical risk minimization
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers