Theory of Classification: a Survey of Some Recent Advances

From MaRDI portal
Publication: 3373749

DOI: 10.1051/ps:2005018
zbMath: 1136.62355
OpenAlex: W2014902932
Wikidata: Q58374465
Scholia: Q58374465
MaRDI QID: Q3373749

Stéphane Boucheron, Olivier Bousquet, Gábor Lugosi

Publication date: 9 March 2006

Published in: ESAIM: Probability and Statistics

Full work available at URL: http://www.numdam.org/item?id=PS_2005__9__323_0



Related Items

Sampling and empirical risk minimization
Neural network approximation
An empirical classification procedure for nonparametric mixture models
Classification in general finite dimensional spaces with the \(k\)-nearest neighbor rule
Classification with reject option
Efficiency of classification methods based on empirical risk minimization
PAC-Bayesian high dimensional bipartite ranking
Model selection by bootstrap penalization for classification
Guest editorial: Learning theory
On the kernel rule for function classification
Fast learning rates in statistical inference through aggregation
Stability and minimax optimality of tangential Delaunay complexes for manifold reconstruction
Localization of VC classes: beyond local Rademacher complexities
On regularization algorithms in learning theory
Multi-kernel regularized classifiers
How can we identify the sparsity structure pattern of high-dimensional data: an elementary statistical analysis to interpretable machine learning
Mathematical methods of randomized machine learning
Robust statistical learning with Lipschitz and convex loss functions
Optimal functional supervised classification with separation condition
Overlaying classifiers: a practical approach to optimal scoring
A partial overview of the theory of statistics with functional data
A statistical view of clustering performance through the theory of \(U\)-processes
Consistency of learning algorithms using Attouch–Wets convergence
Hold-out estimates of prediction models for Markov processes
Empirical risk minimization for heavy-tailed losses
SVRG meets AdaGrad: painless variance reduction
Adaptive partitioning schemes for bipartite ranking
Learning noisy linear classifiers via adaptive and selective sampling
Learning bounds for quantum circuits in the agnostic setting
Ranking and empirical minimization of \(U\)-statistics
Local nearest neighbour classification with applications to semi-supervised learning
Properties of convergence of a fuzzy set estimator of the density function
Robustness and generalization
Strong \(L^p\) convergence of wavelet deconvolution density estimators
Robust classification via MOM minimization
Risk bounds for CART classifiers under a margin condition
Upper bounds and aggregation in bipartite ranking
Concentration inequalities for samples without replacement
Classification with minimax fast rates for classes of Bayes rules with sparse representation
Plugin procedure in segmentation and application to hyperspectral image segmentation
Kullback-Leibler aggregation and misspecified generalized linear models
Adaptive kernel methods using the balancing principle
Relative deviation learning bounds and generalization with unbounded loss functions
An empirical comparison of learning algorithms for nonparametric scoring: the \textsc{TreeRank} algorithm and other methods
Cover-based combinatorial bounds on probability of overfitting
Combinatorial bounds of overfitting for threshold classifiers
Kernel methods in machine learning
Structure from randomness in halfspace learning with the zero-one loss
Obtaining fast error rates in nonconvex situations
Simultaneous adaptation to the margin and to complexity in classification
Classification algorithms using adaptive partitioning
Concentration inequalities for two-sample rank processes with application to bipartite ranking
Convergence conditions for the observed mean method in stochastic programming
Sample complexity of classifiers taking values in \(\mathbb{R}^Q\), application to multi-class SVMs
Optimal rates of aggregation in classification under low noise assumption
Learning by mirror averaging
Constructing processes with prescribed mixing coefficients
Reducing mechanism design to algorithm design via machine learning
Learning sets with separating kernels
Classification with many classes: challenges and pluses
A survey of cross-validation procedures for model selection
Maxisets for model selection
A high-dimensional Wilks phenomenon
Supervised learning by support vector machines
Testing piecewise functions
Fast learning rates for plug-in classifiers
Variance-based regularization with convex objectives
Sharpness estimation of combinatorial generalization ability bounds for threshold decision rules
Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory
Generalized mirror averaging and \(D\)-convex aggregation
PAC-Bayesian bounds for randomized empirical risk minimizers
Bandwidth selection in kernel empirical risk minimization via the gradient
Measuring the capacity of sets of functions in the analysis of ERM
Agnostic active learning
Bayesian approach, theory of empirical risk minimization. Comparative analysis
Optimal weighted nearest neighbour classifiers
Optimal survey schemes for stochastic gradient descent with applications to M-estimation
Robust \(k\)-means clustering for distributions with two moments
When are epsilon-nets small?
On signal representations within the Bayes decision framework
Adaptive estimation of the optimal ROC curve and a bipartite ranking algorithm
Permutational Rademacher complexity
Instability, complexity, and evolution
Statistical analysis of Mapper for stochastic and multivariate filters
Percolation centrality via Rademacher complexity
Minimax fast rates for discriminant analysis with errors in variables
Statistical active learning algorithms for noise tolerance and differential privacy
Statistical learning from biased training samples
Stochastic difference-of-convex-functions algorithms for nonconvex programming
For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability
Depth separations in neural networks: what is actually being separated?

