On the Rate of Convergence of Regularized Boosting Classifiers
Publication: 4823525
DOI: 10.1162/1532443041424319 · zbMath: 1083.68109 · OpenAlex: W4245623693 · MaRDI QID: Q4823525
Gilles Blanchard, Nicolas Vayatis, Gábor Lugosi
Publication date: 28 October 2004
Published in: Journal of Machine Learning Research (the original MIT Press DOI has since been deleted from CrossRef)
Full work available at URL: http://jmlr.csail.mit.edu/papers/v4/blanchard03a.html
Learning and adaptive systems in artificial intelligence (68T05); Pattern recognition, speech recognition (68T10)
Related Items (45)
Learning performance of regularized moving least square regression
Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
Classifiers of support vector machine type with \(\ell_1\) complexity regularization
Learning with sample dependent hypothesis spaces
Multi-kernel regularized classifiers
Convergence rates for empirical barycenters in metric spaces: curvature, convexity and extendable geodesics
Infinitesimal gradient boosting
Unnamed Item
Accelerated gradient boosting
Statistical performance of support vector machines
Ranking and empirical minimization of \(U\)-statistics
Consistency and convergence rate for nearest subspace classifier
Calibrated asymmetric surrogate losses
Classification with minimax fast rates for classes of Bayes rules with sparse representation
Penalized empirical risk minimization over Besov spaces
Survival ensembles by the sum of pairwise differences with application to lung cancer microarray studies
Boosting algorithms: regularization, prediction and model fitting
Margin-adaptive model selection in statistical learning
Fast learning from \(\alpha\)-mixing observations
Learning with Convex Loss and Indefinite Kernels
Boosting simple learners
Learning Theory Estimates with Observations from General Stationary Stochastic Processes
Learning rate of support vector machine for ranking
Convergence analysis of online algorithms
Simultaneous adaptation to the margin and to complexity in classification
Optimal exponential bounds on the accuracy of classification
Cox process functional learning
Optimal rates of aggregation in classification under low noise assumption
On the rate of convergence for multi-category classification based on convex losses
Boosting and instability for regression trees
Surrogate losses in passive and active learning
Deformation of log-likelihood loss function for multiclass boosting
Fast learning rates for plug-in classifiers
New multicategory boosting algorithms based on multicategory Fisher-consistent losses
A large-sample theory for infinitesimal gradient boosting
Theory of Classification: a Survey of Some Recent Advances
Moving quantile regression
On the Optimality of Sample-Based Estimates of the Expectation of the Empirical Minimizer
FAST RATES FOR ESTIMATION ERROR AND ORACLE INEQUALITIES FOR MODEL SELECTION
Boosted nonparametric hazards with time-dependent covariates
Square root penalty: Adaptation to the margin in classification and in edge estimation
Complexities of convex combinations and bounding the generalization error in classification
Boosting with early stopping: convergence and consistency
Optimization by Gradient Boosting