Statistical behavior and consistency of classification methods based on convex risk minimization.

From MaRDI portal

Publication:1884603

DOI: 10.1214/AOS/1079120130
zbMath: 1105.62323
OpenAlex: W2023163512
MaRDI QID: Q1884603

Tong Zhang

Publication date: 5 November 2004

Published in: The Annals of Statistics

Full work available at URL: https://projecteuclid.org/euclid.aos/1079120130
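The publication analyzes classifiers obtained by minimizing a convex surrogate risk (e.g., logistic, hinge, or exponential loss) in place of the non-convex 0-1 classification loss, and relates the surrogate risk of the resulting classifier to its Bayes classification error. As a minimal illustrative sketch only, assuming a logistic surrogate, plain gradient descent, and synthetic data (none of these specifics come from the page itself):

```python
import numpy as np

# Sketch: minimize an empirical convex surrogate risk (logistic loss)
# instead of the non-convex 0-1 risk, then compare the two risks.
rng = np.random.default_rng(0)
n, d = 200, 2
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.5 * rng.normal(size=n))  # labels in {-1, +1}

w = np.zeros(d)
lr = 0.1
for _ in range(500):
    margins = y * (X @ w)  # y_i * f(x_i) with the linear rule f(x) = w . x
    # Gradient of (1/n) * sum_i log(1 + exp(-margins_i)), convex in w
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= lr * grad

surrogate_risk = np.mean(np.log1p(np.exp(-y * (X @ w))))
zero_one_risk = np.mean(np.sign(X @ w) != y)
print(f"surrogate risk {surrogate_risk:.3f}, 0-1 risk {zero_one_risk:.3f}")
```

Because the logistic loss upper-bounds (a scaled version of) the 0-1 loss and is convex, driving the surrogate risk down also controls the classification error; quantifying and generalizing this relationship for a broad family of convex losses is the subject of the paper.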




Related Items (only showing first 100 items)

Online regularized learning with pairwise loss functions
InfoGram and admissible machine learning
Consistency and generalization bounds for maximum entropy density estimation
Generalization performance of Lagrangian support vector machine based on Markov sampling
On the Bayes-risk consistency of regularized boosting methods.
V-shaped interval insensitive loss for ordinal classification
Learning from binary labels with instance-dependent noise
Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
The new interpretation of support vector machines on statistical learning theory
1-bit matrix completion: PAC-Bayesian analysis of a variational approximation
ERM learning algorithm for multi-class classification
Fully online classification by regularization
A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
Classifiers of support vector machine type with \(\ell_1\) complexity regularization
Learning rates of kernel-based robust classification
Multi-kernel regularized classifiers
Universally consistent vertex classification for latent positions graphs
Averaging versus voting: a comparative study of strategies for distributed classification
Principled analytic classifier for positive-unlabeled learning via weighted integral probability metric
A meta-cognitive learning algorithm for a fully complex-valued relaxation network
Fast structured prediction using large margin sigmoid belief networks
Goal scoring, coherent loss and applications to machine learning
Learning rates for the kernel regularized regression with a differentiable strongly convex loss
Quantitative convergence analysis of kernel based large-margin unified machines
Statistical performance of support vector machines
Ranking and empirical minimization of \(U\)-statistics
Linear classifiers are nearly optimal when hidden variables have diverse effects
A review of boosting methods for imbalanced data classification
Likelihood-free inference via classification
Recursive aggregation of estimators by the mirror descent algorithm with averaging
Oracle properties of SCAD-penalized support vector machine
Multiclass Boosting Algorithms for Shrinkage Estimators of Class Probability
Calibrated asymmetric surrogate losses
Oracle inequalities for cross-validation type procedures
Upper bounds and aggregation in bipartite ranking
Learning sparse gradients for variable selection and dimension reduction
Further results on the margin explanation of boosting: new algorithm and experiments
Mirror averaging with sparsity priors
The asymptotics of ranking algorithms
Generalization Bounds for Some Ordinal Regression Algorithms
Random classification noise defeats all convex potential boosters
Learning with mitigating random consistency from the accuracy measure
Generalization ability of fractional polynomial models
On the consistency of multi-label learning
Classification with non-i.i.d. sampling
Does modeling lead to more accurate classification? A study of relative efficiency in linear classification
Convergence analysis of online algorithms
Simultaneous adaptation to the margin and to complexity in classification
Cox process functional learning
Unregularized online learning algorithms with general loss functions
Optimal rates of aggregation in classification under low noise assumption
Support vector machines based on convex risk functions and general norms
Classification with polynomial kernels and \(l^1\)-coefficient regularization
On the rate of convergence for multi-category classification based on convex losses
Parzen windows for multi-class classification
Learning rates for regularized classifiers using multivariate polynomial kernels
Learning from dependent observations
Learning the coordinate gradients
Learning rates for multi-kernel linear programming classifiers
On qualitative robustness of support vector machines
Learning from non-identical sampling for classification
Remembering Leo Breiman
Multicategory vertex discriminant analysis for high-dimensional data
Surrogate losses in passive and active learning
Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions
Statistical inference of minimum BD estimators and classifiers for varying-dimensional models
Learning errors of linear programming support vector regression
A note on margin-based loss functions in classification
Nonparametric Modeling of Neural Point Processes via Stochastic Gradient Boosting Regression
Approximation with polynomial kernels and SVM classifiers
Supervised Learning by Support Vector Machines
Rademacher Chaos Complexities for Learning the Kernel Problem
Deformation of log-likelihood loss function for multiclass boosting
Fast rates for support vector machines using Gaussian kernels
Generalized Hadamard fractional integral inequalities for strongly \((s,m)\)-convex functions
New multicategory boosting algorithms based on multicategory Fisher-consistent losses
PAC-Bayesian bounds for randomized empirical risk minimizers
Robust support vector machines based on the rescaled hinge loss function
Estimating treatment effect heterogeneity in randomized program evaluation
Robustness of learning algorithms using hinge loss with outlier indicators
Theory of Classification: a Survey of Some Recent Advances
Learning rates of gradient descent algorithm for classification
An Algorithm for Unconstrained Quadratically Penalized Convex Optimization
Robust learning from bites for data mining
SVM-boosting based on Markov resampling: theory and algorithm
On surrogate loss functions and \(f\)-divergences
Convergence rates of generalization errors for margin-based classification
SVM LEARNING AND Lp APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS
Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
Risk-sensitive loss functions for sparse multi-category classification problems
On boosting kernel regression
Sparse classification: a scalable discrete optimization perspective
The asymptotic optimization of pre-edited ANN classifier
Online regularized generalized gradient classification algorithms
On the properties of variational approximations of Gibbs posteriors
Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses
Generalization performance of Gaussian kernels SVMC based on Markov sampling
Complexities of convex combinations and bounding the generalization error in classification
Boosting with early stopping: convergence and consistency
Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory


Uses Software



Cites Work




This page was built for publication: Statistical behavior and consistency of classification methods based on convex risk minimization.