Empirical margin distributions and bounding the generalization error of combined classifiers
From MaRDI portal
Publication: 1848928
zbMath: 1012.62004 · arXiv: math/0405343 · MaRDI QID: Q1848928
Dmitriy Panchenko, Vladimir I. Koltchinskii
Publication date: 14 November 2002
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/math/0405343
Classification:
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Order statistics; empirical distribution functions (62G30)
- Statistical aspects of information-theoretic topics (62B10)
- Neural nets and related approaches to inference from stochastic processes (62M45)
- Statistical distribution theory (62E99)
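For context, the paper's central object is the empirical margin distribution of a convex combination of classifiers: with base classifiers taking values in \([-1, 1]\), weights summing to one, and labels in \(\{-1, +1\}\), the margin of an example is the label times the combined vote, and the empirical margin distribution at level \(t\) is the fraction of training examples with margin at most \(t\). A minimal sketch (all names and the toy data are illustrative, not from the paper):

```python
import numpy as np

def empirical_margin_distribution(base_predictions, weights, labels, t):
    """Fraction of examples (x_i, y_i) with y_i * f(x_i) <= t, where
    f(x) = sum_j w_j h_j(x) is the weighted vote of the base classifiers."""
    f = base_predictions @ weights   # combined convex vote, shape (n,)
    margins = labels * f             # y_i * f(x_i), in [-1, 1]
    return np.mean(margins <= t)

# Toy data: 3 base classifiers evaluated on 4 examples, values in {-1, +1}.
H = np.array([[+1, +1, -1],
              [+1, -1, +1],
              [-1, -1, -1],
              [+1, +1, +1]], dtype=float)
w = np.array([0.5, 0.3, 0.2])                 # convex weights (sum to 1)
y = np.array([+1, +1, -1, +1], dtype=float)   # true labels

# Margins are [0.6, 0.4, 1.0, 1.0]; only one of the four is <= 0.5.
print(empirical_margin_distribution(H, w, y, 0.5))   # → 0.25
```

The bounds in the paper control the generalization error of the combined classifier in terms of this distribution at small margin levels \(t\), trading off against the complexity of the base class.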
Related Items
- Bounding the generalization error of convex combinations of classifiers: Balancing the dimensionality and the margins.
- Multi-category diagnostic accuracy based on logistic regression
- Consistency and generalization bounds for maximum entropy density estimation
- Complexity regularization via localized random penalties
- Influence diagnostics in support vector machines
- On the Bayes-risk consistency of regularized boosting methods.
- Optimal aggregation of classifiers in statistical learning.
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Noisy tensor completion via the sum-of-squares hierarchy
- Complexity of pattern classes and the Lipschitz property
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
- Classifiers of support vector machine type with \(\ell_1\) complexity regularization
- \(L_{p}\)-norm Sauer-Shelah lemma for margin multi-category classifiers
- Analysis of the generalization ability of a full decision tree
- Joint leaf-refinement and ensemble pruning through \(L_1\) regularization
- Tighter guarantees for the compressive multi-layer perceptron
- Statistical performance of support vector machines
- Ranking and empirical minimization of \(U\)-statistics
- Robustness and generalization
- Gradient descent on infinitely wide neural networks: global convergence and generalization
- Generalization error bounds for the logical analysis of data
- Further results on the margin explanation of boosting: new algorithm and experiments
- Boosting algorithms: regularization, prediction and model fitting
- Relative deviation learning bounds and generalization with unbounded loss functions
- Transfer bounds for linear feature learning
- Guaranteed Classification via Regularized Similarity Learning
- Vote counting measures for ensemble classifiers.
- The value of agreement a new boosting algorithm
- Concentration inequalities using the entropy method
- Structure from Randomness in Halfspace Learning with the Zero-One Loss
- Bootstrap -- an exploration
- Robust multicategory support vector machines using difference convex algorithm
- Structural multiple empirical kernel learning
- A note on margin-based loss functions in classification
- Concentration inequalities and asymptotic results for ratio type empirical processes
- New multicategory boosting algorithms based on multicategory Fisher-consistent losses
- Adaptive metric dimensionality reduction
- MREKLM: a fast multiple empirical kernel learning machine
- Theory of Classification: a Survey of Some Recent Advances
- Boosting in the Presence of Outliers: Adaptive Classification With Nonconvex Loss Functions
- Fast generalization error bound of deep learning without scale invariance of activation functions
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- A Vector-Contraction Inequality for Rademacher Complexities
- Learning with Rejection
- Structural Online Learning
- Learning with Deep Cascades
- Interpretable machine learning: fundamental principles and 10 grand challenges
- A statistical learning perspective on switched linear system identification
- Square root penalty: Adaption to the margin in classification and in edge estimation
- AdaBoost and robust one-bit compressed sensing
- Complexities of convex combinations and bounding the generalization error in classification
- Boosting with early stopping: convergence and consistency
- Learning in Repeated Auctions
- Random projections as regularizers: learning a linear discriminant from fewer observations than dimensions
- Generalization bounds for metric and similarity learning