Fast learning rates for plug-in classifiers
Keywords: classification; statistical learning; minimax lower bounds; fast rates of convergence; excess risk; plug-in classifiers
MSC classification:
- Density estimation (62G07)
- Nonparametric regression and quantile regression (62G08)
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Characterization and structure theory for multivariate probability distributions; copulas (62H05)
- Pattern recognition, speech recognition (68T10)
Abstract: It has recently been shown that, under the margin (or low noise) assumption, there exist classifiers attaining fast rates of convergence of the excess Bayes risk, that is, rates faster than \(n^{-1/2}\). The work on this subject has suggested the following two conjectures: (i) the best achievable fast rate is of the order \(n^{-1}\), and (ii) plug-in classifiers generally converge more slowly than classifiers based on empirical risk minimization. We show that both conjectures are false. In particular, we construct plug-in classifiers that achieve not only fast, but also super-fast rates, that is, rates faster than \(n^{-1}\). We establish minimax lower bounds showing that the obtained rates cannot be improved.
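The margin assumption and the plug-in rate argument referred to in the abstract can be sketched as follows. This is a standard formulation of the low-noise condition and the resulting risk bound; the exact norms, constants, and conditions in the paper may differ.

```latex
% Margin (low-noise) assumption with parameter \alpha \ge 0: the regression
% function \eta(x) = P(Y = 1 \mid X = x) rarely takes values near 1/2.
\[
  P\bigl( 0 < |\eta(X) - \tfrac12| \le t \bigr) \le C\, t^{\alpha}
  \qquad \text{for all } t > 0 .
\]
% A plug-in classifier thresholds an estimate \hat\eta_n of \eta at 1/2:
\[
  \hat f_n(x) = \mathbf{1}\{\hat\eta_n(x) \ge \tfrac12\} .
\]
% If \hat\eta_n estimates \eta at rate a_n (e.g. in sup-norm), the margin
% assumption boosts this to an excess-risk rate of order a_n^{1+\alpha}:
\[
  \mathbb{E}\bigl[ R(\hat f_n) - R^* \bigr] \le C\, a_n^{1+\alpha} .
\]
% Plugging in the nonparametric regression rate a_n = n^{-\beta/(2\beta+d)}
% for a \beta-Hölder regression function in dimension d yields the excess-risk
% rate n^{-\beta(1+\alpha)/(2\beta+d)}, which is faster than n^{-1/2} -- and,
% for \alpha\beta large relative to d, even faster than n^{-1}.
```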
Recommendations
- On the Rate of Convergence of Local Averaging Plug-In Classification Rules Under a Margin Condition
- Learning from non-iid data: fast rates for the one-vs-all multiclass plug-in classifiers
- Fast classification rates without standard margin assumptions
- Optimal aggregation of classifiers in statistical learning
- Minimax nonparametric classification. I: Rates of convergence
Cites work
- scientific article (zbMATH DE number 1332320)
- scientific article (zbMATH DE number 3446442)
- scientific article (zbMATH DE number 893887)
- scientific article (zbMATH DE number 1420699)
- scientific article (zbMATH DE number 3215519)
- DOI 10.1162/1532443041424319
- A distribution-free theory of nonparametric regression
- An Asymptotically Minimax Regression Estimator in the Uniform Norm up to Exact Constant
- Classifiers of support vector machine type with \(\ell_1\) complexity regularization
- Convexity, Classification, and Risk Bounds
- Fast learning rates for plug-in classifiers
- Fast rates for support vector machines using Gaussian kernels
- Information-theoretic determination of minimax rates of convergence
- Introduction to nonparametric estimation
- Learning Theory
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Minimax nonparametric classification. I: Rates of convergence
- Optimal aggregation of classifiers in statistical learning
- Optimal global rates of convergence for nonparametric regression
- PIECEWISE-POLYNOMIAL APPROXIMATIONS OF FUNCTIONS OF THE CLASSES $ W_{p}^{\alpha}$
- Rate of convergence of nonparametric estimates of maximum-likelihood type
- Risk bounds for statistical learning
- Smooth discrimination analysis
- Square root penalty: Adaption to the margin in classification and in edge estimation
- Statistical performance of support vector machines
- Theory of Classification: a Survey of Some Recent Advances
Cited in
(only showing the first 100 items)
- Fast classification rates without standard margin assumptions
- Ranking data with ordinal labels: optimality and pairwise aggregation
- Classification via local multi-resolution projections
- Sharp instruments for classifying compliers and generalizing causal effects
- Learning from non-iid data: fast rates for the one-vs-all multiclass plug-in classifiers
- Optimal weighted nearest neighbour classifiers
- Multi-Armed Angle-Based Direct Learning for Estimating Optimal Individualized Treatment Rules With Various Outcomes
- scientific article (zbMATH DE number 7307471)
- Generalized density clustering
- Overlaying classifiers: A practical approach to optimal scoring
- A survey on Neyman-Pearson classification and suggestions for future research
- Quantum learning: asymptotically optimal classification of qubit states
- Stochastic continuum-armed bandits with additive models: minimax regrets and adaptive algorithm
- Towards convergence rate analysis of random forests for classification
- Deep spectral Q-learning with application to mobile health
- On the Rate of Convergence of Local Averaging Plug-In Classification Rules Under a Margin Condition
- Rejoinder: Optimal Individualized Decision Rules Using Instrumental Variable Methods
- Optimal Individualized Decision Rules Using Instrumental Variable Methods
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
- Penalized empirical risk minimization over Besov spaces
- Adaptive Algorithm for Multi-Armed Bandit Problem with High-Dimensional Covariates
- Convergence rates of deep ReLU networks for multiclass classification
- Self-supervised Metric Learning in Multi-View Data: A Downstream Task Perspective
- Choice of neighbor order in nearest-neighbor classification
- Fast learning rates in statistical inference through aggregation
- Performance guarantees for policy learning
- Selection of variables and dimension reduction in high-dimensional non-parametric regression
- Adaptive novelty detection with false discovery rate guarantee
- Statistically Efficient Advantage Learning for Offline Reinforcement Learning in Infinite Horizons
- Optimal classification and nonparametric regression for functional data
- Benign overfitting and adaptive nonparametric regression
- Semiparametric single-index models for optimal treatment regimens with censored outcomes
- The multi-armed bandit problem with covariates
- Adaptive transfer learning
- Improved classification rates under refined margin conditions
- Classification in postural style
- Local nearest neighbour classification with applications to semi-supervised learning
- Bandit and covariate processes, with finite or non-denumerable set of arms
- Optimal rates for nonparametric F-score binary classification via post-processing
- scientific article (zbMATH DE number 7049742)
- An error analysis for deep binary classification with sigmoid loss
- A theory of learning with corrupted labels
- Margin-adaptive model selection in statistical learning
- Optimal individualized treatments in resource-limited settings
- Convergence rates of generalization errors for margin-based classification
- Asymptotic theory of \(\ell_1\)-regularized PDE identification from a single noisy trajectory
- On regression and classification with possibly missing response variables in the data
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- Deep neural network classifier for multidimensional functional data
- Fast convergence on perfect classification for functional data
- Distributed adaptive nearest neighbor classifier: algorithm and theory
- Fast learning rates for plug-in classifiers
- Nonparametric plug-in classifier for multiclass classification of S.D.E. paths
- Nonparametric classification with missing data
- Transfer learning for nonparametric classification: minimax rate and adaptive classifier
- Consistency of plug-in confidence sets for classification in semi-supervised learning
- PAC-Bayesian high dimensional bipartite ranking
- A kernel-type regression estimator for NMAR response variables with applications to classification
- Classification with minimum ambiguity under distribution heterogeneity
- Supervised classification for a family of Gaussian functional models
- Regret lower bound and optimal algorithm for high-dimensional contextual linear bandit
- Optimal subgroup selection
- TNN: a transfer learning classifier based on weighted nearest neighbors
- Noisy discriminant analysis with boundary assumptions
- New equivalences between interpolation and SVMs: kernels and structured features
- Upper bounds and aggregation in bipartite ranking
- Obtaining fast error rates in nonconvex situations
- Strongly universally consistent nonparametric regression and classification with privatised data
- Surrogate losses in passive and active learning
- Transfer learning for contextual multi-armed bandits
- scientific article (zbMATH DE number 7415094)
- Optimal convergence rates of deep neural networks in a classification setting
- Sensitivity Analysis via the Proportion of Unmeasured Confounding
- Classification Trees for Imbalanced Data: Surface-to-Volume Regularization
- Value Enhancement of Reinforcement Learning via Efficient and Robust Trust Region Optimization
- Robust empirical Bayes tests for continuous distributions
- Randomized allocation with arm elimination in a bandit problem with covariates
- scientific article (zbMATH DE number 7370641)
- Benefit of Interpolation in Nearest Neighbor Algorithms
- Fast rates in statistical and online learning
- Minimax semi-supervised set-valued approach to multi-class classification
- Least Ambiguous Set-Valued Classifiers With Bounded Error Levels
- Classification algorithms using adaptive partitioning
- Optimal exponential bounds on the accuracy of classification
- Dimension reduction-based adaptive-to-model semi-supervised classification
- Optimal functional supervised classification with separation condition
- Fast convergence rates of deep neural networks for classification
- Fast rates for general unbounded loss functions: from ERM to generalized Bayes
- Approximation of limit state surfaces in monotonic Monte Carlo settings, with applications to classification
- Rate of convergence of \(k\)-nearest-neighbor classification rule
- Posterior concentration and fast convergence rates for generalized Bayesian learning
- Noisy independent factor analysis model for density estimation and classification
- Local Rademacher complexity: sharper risk bounds with and without unlabeled samples
- Inverse statistical learning
- Intrinsic dimension adaptive partitioning for kernel methods
- Confidence regions for level sets
- A Sparse Random Projection-Based Test for Overall Qualitative Treatment Effects
- Distribution-Free Prediction Sets
- Minimax fast rates for discriminant analysis with errors in variables
- Policy Learning with Asymmetric Counterfactual Utilities
This page was built for publication: Fast learning rates for plug-in classifiers
MaRDI item: Q995418