10.1162/1532443041424319

From MaRDI portal

Publication:4823525

DOI: 10.1162/1532443041424319
zbMath: 1083.68109
OpenAlex: W4245623693
MaRDI QID: Q4823525

Gilles Blanchard, Nicolas Vayatis, Gábor Lugosi

Publication date: 28 October 2004

Published in: CrossRef Listing of Deleted DOIs

Full work available at URL: http://jmlr.csail.mit.edu/papers/v4/blanchard03a.html





Related Items (45)

Learning performance of regularized moving least square regression
Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
Classifiers of support vector machine type with \(\ell_1\) complexity regularization
Learning with sample dependent hypothesis spaces
Multi-kernel regularized classifiers
Convergence rates for empirical barycenters in metric spaces: curvature, convexity and extendable geodesics
Infinitesimal gradient boosting
Unnamed Item
Accelerated gradient boosting
Statistical performance of support vector machines
Ranking and empirical minimization of \(U\)-statistics
Consistency and convergence rate for nearest subspace classifier
Calibrated asymmetric surrogate losses
Classification with minimax fast rates for classes of Bayes rules with sparse representation
Penalized empirical risk minimization over Besov spaces
Survival ensembles by the sum of pairwise differences with application to lung cancer microarray studies
Boosting algorithms: regularization, prediction and model fitting
Margin-adaptive model selection in statistical learning
Fast learning from \(\alpha\)-mixing observations
Learning with Convex Loss and Indefinite Kernels
Boosting simple learners
Learning Theory Estimates with Observations from General Stationary Stochastic Processes
Learning rate of support vector machine for ranking
Convergence analysis of online algorithms
Simultaneous adaptation to the margin and to complexity in classification
Optimal exponential bounds on the accuracy of classification
Cox process functional learning
Optimal rates of aggregation in classification under low noise assumption
On the rate of convergence for multi-category classification based on convex losses
Boosting and instability for regression trees
Surrogate losses in passive and active learning
Deformation of log-likelihood loss function for multiclass boosting
Fast learning rates for plug-in classifiers
New multicategory boosting algorithms based on multicategory Fisher-consistent losses
A large-sample theory for infinitesimal gradient boosting
Theory of Classification: a Survey of Some Recent Advances
Moving quantile regression
On the Optimality of Sample-Based Estimates of the Expectation of the Empirical Minimizer
FAST RATES FOR ESTIMATION ERROR AND ORACLE INEQUALITIES FOR MODEL SELECTION
Boosted nonparametric hazards with time-dependent covariates
Square root penalty: Adaption to the margin in classification and in edge estimation
Complexities of convex combinations and bounding the generalization error in classification
Boosting with early stopping: convergence and consistency
Optimization by Gradient Boosting






