On aggregation for heavy-tailed classes
From MaRDI portal
DOI: 10.1007/s00440-016-0720-6
zbMath: 1371.62032
arXiv: 1502.07097
OpenAlex: W2963719122
MaRDI QID: Q2363649
Publication date: 25 July 2017
Published in: Probability Theory and Related Fields
Full work available at URL: https://arxiv.org/abs/1502.07097
Keywords: aggregation; learning theory; learning procedure; aggregation procedure; heavy-tailed class; two-sided isomorphic estimator
MSC classifications
- Density estimation (62G07)
- Sums of independent random variables; random walks (60G50)
- Learning and adaptive systems in artificial intelligence (68T05)
- Prediction theory (aspects of stochastic processes) (60G25)
- Nonparametric inference (62G99)
Related Items
- Learning without concentration for general loss functions
- Unnamed Item
- Regularization, sparse recovery, and median-of-means tournaments
- Learning from MOM's principles: Le Cam's approach
- Convergence rates of least squares regression estimators with heavy-tailed errors
- On Monte-Carlo methods in convex stochastic optimization
- Distribution-free robust linear regression
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Performance of empirical risk minimization in linear aggregation
- Upper bounds on product and multiplier empirical processes
- Some limit theorems for empirical processes (with discussion)
- Aggregation via empirical risk minimization
- Learning by mirror averaging
- Noise stability of functions with low influences: invariance and optimality
- Sharper bounds for Gaussian and empirical processes
- Learning without concentration for general loss functions
- Statistical learning theory and stochastic optimization. École d'Été de Probabilités de Saint-Flour XXXI -- 2001.
- Weak convergence and empirical processes. With applications to statistics
- Fast learning rates in statistical inference through aggregation
- Reconstruction and subgaussian operators in asymptotic geometric analysis
- Minimax rate of convergence and the performance of empirical risk minimization in phase recovery
- Learning without Concentration
- Bounding the Smallest Singular Value of a Random Matrix Without Concentration
- Neural Network Learning
- A Remark on the Diameter of Random Sections of Convex Bodies
- Introduction to nonparametric estimation