On aggregation for heavy-tailed classes
From MaRDI portal
Abstract: We introduce an alternative to the notion of `fast rate' in Learning Theory, which coincides with the optimal error rate when the given class happens to be convex and regular in some sense. While it is well known that such a rate cannot always be attained by a learning procedure (i.e., a procedure that selects a function in the given class), we introduce an aggregation procedure that attains that rate under rather minimal assumptions -- for example, that the \(L_q\) and \(L_2\) norms are equivalent on the linear span of the class for some \(q > 2\), and that the target random variable is square-integrable.
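As a hedged illustration of the setting (the notation \(F\), \(X\), \(Y\), \(q\), \(L\), \(\tilde f\) below is chosen here for exposition and is not taken from the paper), the squared-loss aggregation framework and the norm-equivalence assumption mentioned in the abstract can be sketched as follows.

```latex
% Illustrative sketch only; the notation (F, X, Y, q, L, \tilde f) is introduced here
% and is not taken from the paper. A learning procedure must return some \hat f in the
% class F, whereas an aggregation procedure may return any measurable \tilde f built
% from members of F; its excess squared-loss risk relative to the class is
\[
  \mathbb{E}\bigl(\tilde f(X) - Y\bigr)^2
  \;-\; \inf_{f \in F} \mathbb{E}\bigl(f(X) - Y\bigr)^2 .
\]
% The norm-equivalence assumption mentioned in the abstract: for some q > 2 and a
% constant L, every f in the linear span of F satisfies
\[
  \| f \|_{L_q} \;\le\; L \, \| f \|_{L_2},
\]
% together with square-integrability of the target, \mathbb{E} Y^2 < \infty.
```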
Cites work
- scientific article (zbMATH DE number 49190)
- scientific article (zbMATH DE number 194093)
- scientific article (zbMATH DE number 1522808)
- A remark on the diameter of random sections of convex bodies
- Aggregation via empirical risk minimization
- Bounding the smallest singular value of a random matrix without concentration
- Concentration inequalities. A nonasymptotic theory of independence
- Fast learning rates in statistical inference through aggregation
- Introduction to nonparametric estimation
- Learning by mirror averaging
- Learning without concentration
- Learning without concentration for general loss functions
- Minimax rate of convergence and the performance of empirical risk minimization in phase recovery
- Neural Network Learning
- Noise stability of functions with low influences: invariance and optimality
- Performance of empirical risk minimization in linear aggregation
- Reconstruction and subgaussian operators in asymptotic geometric analysis
- Sharper bounds for Gaussian and empirical processes
- Some limit theorems for empirical processes (with discussion)
- Statistical learning theory and stochastic optimization. École d'Été de Probabilités de Saint-Flour XXXI -- 2001.
- The concentration of measure phenomenon
- Upper bounds on product and multiplier empirical processes
- Weak convergence and empirical processes. With applications to statistics
Cited in (8)
- On Monte-Carlo methods in convex stochastic optimization
- Convergence rates of least squares regression estimators with heavy-tailed errors
- Distribution-free robust linear regression
- Regularization, sparse recovery, and median-of-means tournaments
- Fast rates for general unbounded loss functions: from ERM to generalized Bayes
- Optimal rates of aggregation in classification under low noise assumption
- Learning without concentration for general loss functions
- Learning from MOM's principles: Le Cam's approach
This page was built for publication: On aggregation for heavy-tailed classes (MaRDI item Q2363649)