Statistical behavior and consistency of classification methods based on convex risk minimization.

From MaRDI portal
Publication: 1884603

DOI: 10.1214/aos/1079120130
zbMath: 1105.62323
OpenAlex: W2023163512
MaRDI QID: Q1884603

Tong Zhang

Publication date: 5 November 2004

Published in: The Annals of Statistics

Full work available at URL: https://projecteuclid.org/euclid.aos/1079120130




Related Items

Online regularized learning with pairwise loss functions
InfoGram and admissible machine learning
Consistency and generalization bounds for maximum entropy density estimation
Generalization performance of Lagrangian support vector machine based on Markov sampling
On the Bayes-risk consistency of regularized boosting methods
V-shaped interval insensitive loss for ordinal classification
Learning from binary labels with instance-dependent noise
Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
The new interpretation of support vector machines on statistical learning theory
1-bit matrix completion: PAC-Bayesian analysis of a variational approximation
ERM learning algorithm for multi-class classification
Fully online classification by regularization
A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
Classifiers of support vector machine type with \(\ell_1\) complexity regularization
Learning rates of kernel-based robust classification
Multi-kernel regularized classifiers
Universally consistent vertex classification for latent positions graphs
Averaging versus voting: a comparative study of strategies for distributed classification
Principled analytic classifier for positive-unlabeled learning via weighted integral probability metric
A meta-cognitive learning algorithm for a fully complex-valued relaxation network
Fast structured prediction using large margin sigmoid belief networks
Goal scoring, coherent loss and applications to machine learning
Learning rates for the kernel regularized regression with a differentiable strongly convex loss
Quantitative convergence analysis of kernel based large-margin unified machines
Statistical performance of support vector machines
Ranking and empirical minimization of \(U\)-statistics
Linear classifiers are nearly optimal when hidden variables have diverse effects
A review of boosting methods for imbalanced data classification
Likelihood-free inference via classification
Recursive aggregation of estimators by the mirror descent algorithm with averaging
Oracle properties of SCAD-penalized support vector machine
Multiclass Boosting Algorithms for Shrinkage Estimators of Class Probability
Calibrated asymmetric surrogate losses
Oracle inequalities for cross-validation type procedures
Upper bounds and aggregation in bipartite ranking
Learning sparse gradients for variable selection and dimension reduction
Further results on the margin explanation of boosting: new algorithm and experiments
Mirror averaging with sparsity priors
The asymptotics of ranking algorithms
Generalization Bounds for Some Ordinal Regression Algorithms
Random classification noise defeats all convex potential boosters
Learning with mitigating random consistency from the accuracy measure
Generalization ability of fractional polynomial models
On the consistency of multi-label learning
Classification with non-i.i.d. sampling
Does modeling lead to more accurate classification? A study of relative efficiency in linear classification
Convergence analysis of online algorithms
Simultaneous adaptation to the margin and to complexity in classification
Cox process functional learning
Unregularized online learning algorithms with general loss functions
Optimal rates of aggregation in classification under low noise assumption
Support vector machines based on convex risk functions and general norms
Classification with polynomial kernels and \(l^1\)-coefficient regularization
On the rate of convergence for multi-category classification based on convex losses
Parzen windows for multi-class classification
Learning rates for regularized classifiers using multivariate polynomial kernels
Learning from dependent observations
Learning the coordinate gradients
Learning rates for multi-kernel linear programming classifiers
On qualitative robustness of support vector machines
Learning from non-identical sampling for classification
Remembering Leo Breiman
Multicategory vertex discriminant analysis for high-dimensional data
Surrogate losses in passive and active learning
Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions
Statistical inference of minimum BD estimators and classifiers for varying-dimensional models
Learning errors of linear programming support vector regression
A note on margin-based loss functions in classification
Nonparametric Modeling of Neural Point Processes via Stochastic Gradient Boosting Regression
Approximation with polynomial kernels and SVM classifiers
Supervised Learning by Support Vector Machines
Rademacher Chaos Complexities for Learning the Kernel Problem
Deformation of log-likelihood loss function for multiclass boosting
Fast rates for support vector machines using Gaussian kernels
Generalized Hadamard fractional integral inequalities for strongly \((s,m)\)-convex functions
New multicategory boosting algorithms based on multicategory Fisher-consistent losses
PAC-Bayesian bounds for randomized empirical risk minimizers
Robust support vector machines based on the rescaled hinge loss function
Estimating treatment effect heterogeneity in randomized program evaluation
Robustness of learning algorithms using hinge loss with outlier indicators
Theory of Classification: a Survey of Some Recent Advances
Learning rates of gradient descent algorithm for classification
An Algorithm for Unconstrained Quadratically Penalized Convex Optimization
Robust learning from bites for data mining
SVM-boosting based on Markov resampling: theory and algorithm
On surrogate loss functions and \(f\)-divergences
Convergence rates of generalization errors for margin-based classification
SVM LEARNING AND Lp APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS
Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
Risk-sensitive loss functions for sparse multi-category classification problems
On boosting kernel regression
Sparse classification: a scalable discrete optimization perspective
The asymptotic optimization of pre-edited ANN classifier
Online regularized generalized gradient classification algorithms
On the properties of variational approximations of Gibbs posteriors
Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses
Generalization performance of Gaussian kernels SVMC based on Markov sampling
Complexities of convex combinations and bounding the generalization error in classification
Boosting with early stopping: convergence and consistency
Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory
Learning theory of minimum error entropy under weak moment conditions
Deep learning: a statistical viewpoint
A Statistical Learning Approach to Modal Regression
Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications
Error Analysis of Coefficient-Based Regularized Algorithm for Density-Level Detection
Receiver operating characteristic curves and confidence bands for support vector machines
A nonlinear kernel SVM Classifier via \(L_{0/1}\) soft-margin loss with classification performance
Fast convergence rates of deep neural networks for classification
A reduced-rank approach to predicting multiple binary responses through machine learning
Error analysis of classification learning algorithms based on LUMs loss
Proximal Activation of Smooth Functions in Splitting Algorithms for Convex Image Recovery
Consistency and convergence rate for nearest subspace classifier
Refined Rademacher Chaos Complexity Bounds with Applications to the Multikernel Learning Problem
A Note on Support Vector Machines with Polynomial Kernels
Robust Support Vector Machines for Classification with Nonconvex and Smooth Losses
Learning Theory Estimates with Observations from General Stationary Stochastic Processes
Generalization Analysis of Fredholm Kernel Regularized Classifiers
Learning Rates for Classification with Gaussian Kernels
TESTING FOR RANDOM EFFECTS IN COMPOUND RISK MODELS VIA BREGMAN DIVERGENCE
Surprising properties of dropout in deep networks
Selection of Binary Variables and Classification by Boosting
SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
A fast unified algorithm for solving group-lasso penalize learning problems
Comment
Discussion of ``Hypothesis testing by convex optimization''
On Reject and Refine Options in Multicategory Classification
Composite large margin classifiers with latent subclasses for heterogeneous biomedical data
Another Look at Distance-Weighted Discrimination
Confidence sets with expected sizes for Multiclass Classification
Sparse additive machine with ramp loss
Partial differential equation regularization for supervised machine learning
Online Classification with Varying Gaussians
Comparison theorems on large-margin learning
Optimization by Gradient Boosting
Estimating Individualized Treatment Rules Using Outcome Weighted Learning

