Complexities of convex combinations and bounding the generalization error in classification
From MaRDI portal
Publication:2583410
MSC classification
- Nonparametric estimation (62G05)
- Asymptotic properties of nonparametric inference (62G20)
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Complexity and performance of numerical algorithms (65Y20)
- Analysis of algorithms and problem complexity (68Q25)
- Computational learning theory (68Q32)
- Strong limit theorems (60F15)
- Artificial intelligence (68T99)
Abstract: We introduce and study several measures of complexity of functions from the convex hull of a given base class. These complexity measures take into account the sparsity of the weights of a convex combination as well as certain clustering properties of the base functions involved in it. We prove new upper confidence bounds on the generalization error of ensemble (voting) classification algorithms that utilize the new complexity measures along with the empirical distributions of classification margins, providing a better explanation of generalization performance of large margin classification methods.
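The margin framework described in the abstract can be illustrated with a small sketch. The code below is a hypothetical toy example (the data, function names, and weights are illustrative, not taken from the paper): it computes the empirical margins y·f(x) of a voting classifier f given by a convex combination of ±1-valued base classifiers, and the empirical margin distribution, i.e. the fraction of examples with margin below a threshold δ, which is the quantity the paper's generalization bounds are stated in terms of.

```python
import numpy as np

def empirical_margins(base_preds, weights, y):
    """Margin y * f(x) for each example, where f(x) = sum_j w_j h_j(x)
    is a convex combination of base classifiers (columns of base_preds,
    predictions in {-1, +1}). Positive margin means a correct vote."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()            # normalize so the weights are a convex combination
    f = base_preds @ w         # combined vote, a value in [-1, 1]
    return y * f

def margin_distribution(margins, delta):
    """Empirical fraction of examples whose margin falls below delta."""
    return np.mean(margins < delta)

# Hypothetical toy data: 3 base classifiers evaluated on 4 examples.
base_preds = np.array([[ 1,  1, -1],
                       [ 1, -1, -1],
                       [-1, -1,  1],
                       [ 1,  1,  1]])
y = np.array([1, 1, -1, -1])

m = empirical_margins(base_preds, [0.5, 0.3, 0.2], y)
# m is approximately [0.6, 0.0, 0.6, -1.0]; half the examples
# have margin below delta = 0.1.
```

The sparsity of the weight vector w (how many base classifiers carry non-negligible weight) is one of the complexity measures the abstract refers to; in this sketch it could be read off from `w` directly.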
Recommendations
- Bounding the generalization error of convex combinations of classifiers: Balancing the dimensionality and the margins.
- On the generalization error of fixed combinations of classifiers
- scientific article; zbMATH DE number 5957262
- Convexity, Classification, and Risk Bounds
- Empirical margin distributions and bounding the generalization error of combined classifiers
- Optimal convex error estimators for classification
Cites work
- scientific article; zbMATH DE number 2089354
- scientific article; zbMATH DE number 3772326
- scientific article; zbMATH DE number 1552503
- scientific article; zbMATH DE number 893887
- 10.1162/153244303768966111
- 10.1162/1532443041424319
- 10.1162/1532443041827925
- A note on Talagrand's concentration inequality
- Aggregated estimators and empirical complexity for least square regression
- Arcing classifiers. (With discussion)
- Bagging predictors
- Boosting the margin: a new explanation for the effectiveness of voting methods
- Boosting with early stopping: convergence and consistency
- Bounding the generalization error of convex combinations of classifiers: Balancing the dimensionality and the margins.
- Bounds on margin distributions in learning problems
- Convexity, Classification, and Risk Bounds
- Empirical margin distributions and bounding the generalization error of combined classifiers
- Generalization bounds for voting classifiers based on sparsity and clustering.
- Improved boosting algorithms using confidence-rated predictions
- Learning Theory and Kernel Machines
- Local Rademacher complexities
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- On the Bayes-risk consistency of regularized boosting methods.
- Process consistency for AdaBoost.
- Some applications of concentration inequalities to statistics
- Some extensions of an inequality of Vapnik and Chervonenkis
- Sphere packing numbers for subsets of the Boolean \(n\)-cube with bounded Vapnik-Chervonenkis dimension
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Symmetrization approach to concentration inequalities for empirical processes.
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- Uniform Central Limit Theorems
- Weak convergence and empirical processes. With applications to statistics
Cited in (12)
- Bounding the generalization error of convex combinations of classifiers: Balancing the dimensionality and the margins.
- Boosting with early stopping: convergence and consistency
- Concentration estimates for learning with unbounded sampling
- Sample average approximation with heavier tails. I: Non-asymptotic bounds with weak assumptions and stochastic constraints
- scientific article; zbMATH DE number 5957262
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
- Generalization bounds for voting classifiers based on sparsity and clustering.
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Analysis of boosting algorithms using the smooth margin function
- Sparsity in penalized empirical risk minimization
- Further results on the margin explanation of boosting: new algorithm and experiments
- Generalization bounds for averaged classifiers