Risk bounds for statistical learning
model selection; classification; pattern recognition; empirical processes; concentration inequalities; regression estimation; minimax estimation; VC-dimension; entropy with bracketing; structural minimization of risk; VC-class
Statistical aspects of information-theoretic topics (62B10) Bayesian inference (62F15) Classification and discrimination; cluster analysis (statistical aspects) (62H30) Learning and adaptive systems in artificial intelligence (68T05) Inequalities; stochastic orderings (60E15) Measures of information, entropy (94A17)
Abstract: We propose a general theorem providing upper bounds for the risk of an empirical risk minimizer (ERM). We essentially focus on the binary classification framework. We extend Tsybakov's analysis of the risk of an ERM under margin type conditions by using concentration inequalities for conveniently weighted empirical processes. This allows us to deal with ways of measuring the ``size'' of a class of classifiers other than entropy with bracketing as in Tsybakov's work. In particular, we derive new risk bounds for the ERM when the classification rules belong to some VC-class under margin conditions and discuss the optimality of these bounds in a minimax sense.
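The setting of the abstract can be illustrated with a minimal sketch (the data, class of rules, and all names below are illustrative assumptions, not taken from the paper): empirical risk minimization of the 0-1 loss over the class of threshold classifiers on the line, a VC class of dimension 1, on a noisy sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification sample: X uniform on [0, 1], labels given
# by the threshold rule 1{x > 0.5} with 10% independent label noise.
n = 500
X = rng.uniform(0.0, 1.0, size=n)
y = (X > 0.5).astype(int)
flip = rng.uniform(size=n) < 0.1
y[flip] = 1 - y[flip]

# Candidate class: classifiers x -> 1{x > t} (VC-dimension 1).
# The ERM selects the threshold minimizing the empirical 0-1 risk.
candidates = np.linspace(0.0, 1.0, 201)
emp_risk = [np.mean((X > t).astype(int) != y) for t in candidates]
t_hat = candidates[int(np.argmin(emp_risk))]

print(f"ERM threshold: {t_hat:.3f}, empirical risk: {min(emp_risk):.3f}")
```

The risk bounds discussed in the paper control how far the (random) excess risk of such an ERM lies above the Bayes risk, with rates that depend on the VC-dimension of the class and on the margin (noise) condition.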
Recommendations
- Lower bounds on the Bayes risk for statistical prediction problems
- Risk bounds of learning processes for Lévy processes
- A remark about a learning risk lower bound
- Excess risk bounds in robust empirical risk minimization
- On Bayes risk lower bounds
- Risk bounds when learning infinitely many response functions by ordinary linear regression
- scientific article; zbMATH DE number 4056794
- Risk bounds for random regression graphs
- Finite-Sample Risk Bounds for Maximum Likelihood Estimation With Arbitrary Penalties
Cites work
- scientific article; zbMATH DE number 4170917
- scientific article; zbMATH DE number 4032498
- scientific article; zbMATH DE number 49190
- scientific article; zbMATH DE number 1301684
- scientific article; zbMATH DE number 1064667
- scientific article; zbMATH DE number 3795074
- scientific article; zbMATH DE number 3795075
- scientific article; zbMATH DE number 3446442
- A Bennett concentration inequality and its application to suprema of empirical processes
- A New Lower Bound for Multiple Hypothesis Testing
- Adaptive estimation of the intensity of inhomogeneous Poisson processes via concentration inequalities
- Estimation of dependences based on empirical data. Transl. from the Russian by Samuel Kotz
- Information-theoretic determination of minimax rates of convergence
- Minimax theory of image reconstruction
- Minimum contrast estimators on sieves: Exponential bounds and rates of convergence
- New concentration inequalities in product spaces
- Optimal aggregation of classifiers in statistical learning
- Predicting \(\{ 0,1\}\)-functions on randomly drawn points
- Risk bounds for model selection via penalization
- Smooth discrimination analysis
- Some applications of concentration inequalities to statistics
- Sphere packing numbers for subsets of the Boolean \(n\)-cube with bounded Vapnik-Chervonenkis dimension
- Uniform Central Limit Theorems
Cited in (87)
- A new method for estimation and model selection: \(\rho\)-estimation
- Classification with minimax fast rates for classes of Bayes rules with sparse representation
- A local maximal inequality under uniform entropy
- Penalized empirical risk minimization over Besov spaces
- Statistical performance of support vector machines
- Ranking and empirical minimization of \(U\)-statistics
- Nonasymptotic bounds for vector quantization in Hilbert spaces
- Convergence rates of least squares regression estimators with heavy-tailed errors
- Learning the distribution of latent variables in paired comparison models with round-robin scheduling
- A MOM-based ensemble method for robustness, subsampling and hyperparameter tuning
- Learning from binary labels with instance-dependent noise
- General nonexact oracle inequalities for classes with a subexponential envelope
- Minimax semi-supervised set-valued approach to multi-class classification
- Risk bounds for CART classifiers under a margin condition
- On least squares estimation under heteroscedastic and heavy-tailed errors
- Optimal upper and lower bounds for the true and empirical excess risks in heteroscedastic least-squares regression
- Discussion of ``On concentration for (regularized) empirical risk minimization'' by Sara van de Geer and Martin Wainwright
- Set structured global empirical risk minimizers are rate optimal in general dimensions
- Margin-adaptive model selection in statistical learning
- Sample complexity of sample average approximation for conditional stochastic optimization
- Re-thinking high-dimensional mathematical statistics. Abstracts from the workshop held May 15--21, 2022
- Asymptotics in empirical risk minimization
- A high-dimensional Wilks phenomenon
- Diametrical risk minimization: theory and computations
- On robust learning in the canonical change point problem under heavy tailed errors in finite and growing dimensions
- Fast rates for empirical vector quantization
- Fast learning rates for plug-in classifiers
- Model selection in utility-maximizing binary prediction
- Best subset binary prediction
- Inverse statistical learning
- scientific article; zbMATH DE number 7306919
- Upper bounds and aggregation in bipartite ranking
- A theory of learning with corrupted labels
- Classification with reject option
- Empirical risk minimization is optimal for the convex aggregation problem
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- A statistical view of clustering performance through the theory of \(U\)-processes
- Rates of convergence in active learning
- On concentration for (regularized) empirical risk minimization
- Simultaneous adaptation to the margin and to complexity in classification
- Bounds on margin distributions in learning problems
- Fast rates for general unbounded loss functions: from ERM to generalized Bayes
- Minimax fast rates for discriminant analysis with errors in variables
- Improved Risk Tail Bounds for On-Line Algorithms
- Active learning for cost-sensitive classification
- Improved classification rates under refined margin conditions
- Risk minimization and minimum description for linear discriminant functions
- Optimal linear discriminators for the discrete choice model in growing dimensions
- Robust machine learning by median-of-means: theory and practice
- Noisy discriminant analysis with boundary assumptions
- Concentration inequalities and asymptotic results for ratio type empirical processes
- Classification algorithms using adaptive partitioning
- Learning Theory
- Optimal exponential bounds on the accuracy of classification
- Sharper lower bounds on the performance of the empirical risk minimization algorithm
- Optimal model selection for density estimation of stationary data under various mixing conditions
- Learning without concentration
- Model selection by bootstrap penalization for classification
- Theory of Classification: a Survey of Some Recent Advances
- Optimal rates of aggregation in classification under low noise assumption
- Relative deviation learning bounds and generalization with unbounded loss functions
- A no-free-lunch theorem for multitask learning
- scientific article; zbMATH DE number 7370646
- Risk bounds for new M-estimation problems
- Robust supervised learning with coordinate gradient descent
- scientific article; zbMATH DE number 7415076
- Nonexact oracle inequalities, \(r\)-learnability, and fast rates
- Bandwidth selection in kernel empirical risk minimization via the gradient
- On the optimality of sample-based estimates of the expectation of the empirical minimizer
- A strongly polynomial algorithm for approximate Forster transforms and its application to halfspace learning
- Measuring the capacity of sets of functions in the analysis of ERM
- PAC learning halfspaces in non-interactive local differential privacy model with public unlabeled data
- On biased random walks, corrupted intervals, and learning under adversarial design
- Fast rate of convergence in high-dimensional linear discriminant analysis
- Distinctive features of minimization of a risk functional in mass data sets
- Random subclass bounds
- Optimal functional supervised classification with separation condition
- scientific article; zbMATH DE number 7064063
- Orthogonal statistical learning
- On regression and classification with possibly missing response variables in the data
- Learning with risks based on M-location
- Exact lower bounds for the agnostic probably-approximately-correct (PAC) machine learning model
- Multiclass learnability and the ERM principle
- Gibbs posterior concentration rates under sub-exponential type losses
- Theoretical analysis of cross-validation for estimating the risk of the \(k\)-nearest neighbor classifier
- Sparse quantile regression
- Risk Bounds for CART Regression Trees