Statistical behavior and consistency of classification methods based on convex risk minimization.
Publication: 1884603
DOI: 10.1214/AOS/1079120130
zbMath: 1105.62323
OpenAlex: W2023163512
MaRDI QID: Q1884603
Publication date: 5 November 2004
Published in: The Annals of Statistics
Full work available at URL: https://projecteuclid.org/euclid.aos/1079120130
MSC classifications: Classification and discrimination; cluster analysis (statistical aspects) (62H30); Bayesian inference (62F15)
Related Items (only showing first 100 items)
Online regularized learning with pairwise loss functions ⋮ InfoGram and admissible machine learning ⋮ Consistency and generalization bounds for maximum entropy density estimation ⋮ Generalization performance of Lagrangian support vector machine based on Markov sampling ⋮ On the Bayes-risk consistency of regularized boosting methods. ⋮
V-shaped interval insensitive loss for ordinal classification ⋮ Learning from binary labels with instance-dependent noise ⋮ Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder) ⋮ The new interpretation of support vector machines on statistical learning theory ⋮ 1-bit matrix completion: PAC-Bayesian analysis of a variational approximation ⋮
ERM learning algorithm for multi-class classification ⋮ Fully online classification by regularization ⋮ A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers ⋮ Classifiers of support vector machine type with \(\ell_1\) complexity regularization ⋮ Learning rates of kernel-based robust classification ⋮ Multi-kernel regularized classifiers ⋮
Universally consistent vertex classification for latent positions graphs ⋮ Averaging versus voting: a comparative study of strategies for distributed classification ⋮ Principled analytic classifier for positive-unlabeled learning via weighted integral probability metric ⋮ A meta-cognitive learning algorithm for a fully complex-valued relaxation network ⋮ Fast structured prediction using large margin sigmoid belief networks ⋮
Goal scoring, coherent loss and applications to machine learning ⋮ Learning rates for the kernel regularized regression with a differentiable strongly convex loss ⋮ Quantitative convergence analysis of kernel based large-margin unified machines ⋮ Statistical performance of support vector machines ⋮ Ranking and empirical minimization of \(U\)-statistics ⋮
Linear classifiers are nearly optimal when hidden variables have diverse effects ⋮ A review of boosting methods for imbalanced data classification ⋮ Likelihood-free inference via classification ⋮ Recursive aggregation of estimators by the mirror descent algorithm with averaging ⋮ Oracle properties of SCAD-penalized support vector machine ⋮ Multiclass Boosting Algorithms for Shrinkage Estimators of Class Probability ⋮
Calibrated asymmetric surrogate losses ⋮ Oracle inequalities for cross-validation type procedures ⋮ Upper bounds and aggregation in bipartite ranking ⋮ Learning sparse gradients for variable selection and dimension reduction ⋮ Further results on the margin explanation of boosting: new algorithm and experiments ⋮ Mirror averaging with sparsity priors ⋮ The asymptotics of ranking algorithms ⋮
Generalization Bounds for Some Ordinal Regression Algorithms ⋮ Random classification noise defeats all convex potential boosters ⋮ Learning with mitigating random consistency from the accuracy measure ⋮ Generalization ability of fractional polynomial models ⋮ On the consistency of multi-label learning ⋮ Classification with non-i.i.d. sampling ⋮
Does modeling lead to more accurate classification? A study of relative efficiency in linear classification ⋮ Convergence analysis of online algorithms ⋮ Simultaneous adaptation to the margin and to complexity in classification ⋮ Cox process functional learning ⋮ Unregularized online learning algorithms with general loss functions ⋮ Optimal rates of aggregation in classification under low noise assumption ⋮
Support vector machines based on convex risk functions and general norms ⋮ Classification with polynomial kernels and \(l^1\)-coefficient regularization ⋮ On the rate of convergence for multi-category classification based on convex losses ⋮ Parzen windows for multi-class classification ⋮ Learning rates for regularized classifiers using multivariate polynomial kernels ⋮ Learning from dependent observations ⋮
Learning the coordinate gradients ⋮ Learning rates for multi-kernel linear programming classifiers ⋮ On qualitative robustness of support vector machines ⋮ Learning from non-identical sampling for classification ⋮ Remembering Leo Breiman ⋮ Multicategory vertex discriminant analysis for high-dimensional data ⋮ Surrogate losses in passive and active learning ⋮
Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions ⋮ Statistical inference of minimum BD estimators and classifiers for varying-dimensional models ⋮ Learning errors of linear programming support vector regression ⋮ A note on margin-based loss functions in classification ⋮ Nonparametric Modeling of Neural Point Processes via Stochastic Gradient Boosting Regression ⋮
Approximation with polynomial kernels and SVM classifiers ⋮ Supervised Learning by Support Vector Machines ⋮ Rademacher Chaos Complexities for Learning the Kernel Problem ⋮ Deformation of log-likelihood loss function for multiclass boosting ⋮ Fast rates for support vector machines using Gaussian kernels ⋮ Generalized Hadamard fractional integral inequalities for strongly \((s,m)\)-convex functions ⋮
New multicategory boosting algorithms based on multicategory Fisher-consistent losses ⋮ PAC-Bayesian bounds for randomized empirical risk minimizers ⋮ Robust support vector machines based on the rescaled hinge loss function ⋮ Estimating treatment effect heterogeneity in randomized program evaluation ⋮ Robustness of learning algorithms using hinge loss with outlier indicators ⋮
Theory of Classification: a Survey of Some Recent Advances ⋮ Learning rates of gradient descent algorithm for classification ⋮ An Algorithm for Unconstrained Quadratically Penalized Convex Optimization ⋮ Robust learning from bites for data mining ⋮ SVM-boosting based on Markov resampling: theory and algorithm ⋮ On surrogate loss functions and \(f\)-divergences ⋮
Convergence rates of generalization errors for margin-based classification ⋮ SVM LEARNING AND Lp APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS ⋮ Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions ⋮ Risk-sensitive loss functions for sparse multi-category classification problems ⋮ On boosting kernel regression ⋮
Sparse classification: a scalable discrete optimization perspective ⋮ The asymptotic optimization of pre-edited ANN classifier ⋮ Online regularized generalized gradient classification algorithms ⋮ On the properties of variational approximations of Gibbs posteriors ⋮ Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses ⋮
Generalization performance of Gaussian kernels SVMC based on Markov sampling ⋮ Complexities of convex combinations and bounding the generalization error in classification ⋮ Boosting with early stopping: convergence and consistency ⋮ Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory
Cites Work
- A decision-theoretic generalization of on-line learning and an application to boosting
- Arcing classifiers. (With discussion)
- Boosting the margin: a new explanation for the effectiveness of voting methods
- Additive logistic regression: a statistical view of boosting. (With discussion and a rejoinder by the authors)
- Support vector machines are universally consistent
- On the Bayes-risk consistency of regularized boosting methods.
- Improved boosting algorithms using confidence-rated predictions
- Boosting with the \(L_2\) loss: regression and classification
- Convex Analysis