Performance of empirical risk minimization in linear aggregation
From MaRDI portal
Publication:282546
DOI: 10.3150/15-BEJ701 · zbMATH Open: 1346.60075 · arXiv: 1402.5763 · OpenAlex: W3103224927 · MaRDI QID: Q282546
Authors: Guillaume Lecué, Shahar Mendelson
Publication date: 12 May 2016
Published in: Bernoulli
Abstract: We study conditions under which, given a dictionary \(F\) of cardinality \(M\) and an i.i.d. sample of size \(N\), the empirical minimizer in \(\operatorname{span}(F)\) relative to the squared loss satisfies, with high probability, \[R\bigl(\tilde{f}^{\mathrm{ERM}}\bigr) \leq \inf_{f \in \operatorname{span}(F)} R(f) + r_N(M),\] where \(R\) is the squared risk and \(r_N(M)\) is of the order of \(M/N\). Among other results, we prove that a uniform small-ball estimate for functions in \(\operatorname{span}(F)\) is enough to achieve that goal when the noise is independent of the design.
Full work available at URL: https://arxiv.org/abs/1402.5763
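The procedure studied in the abstract — empirical risk minimization over \(\operatorname{span}(F)\) under the squared loss — reduces to ordinary least squares on the dictionary features. The following is a minimal numerical sketch, not taken from the paper: the dictionary (monomials), the regression function, and the noise level are illustrative assumptions chosen so that the noise is independent of the design, matching the setting of the main result.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 500, 10  # sample size N and dictionary cardinality M

# Illustrative dictionary F = {f_1, ..., f_M}: monomials on [-1, 1]
def dictionary(x):
    return np.vstack([x**j for j in range(M)]).T  # shape (len(x), M)

# Design and responses; noise is independent of the design (assumption)
X = rng.uniform(-1, 1, size=N)
f_star = lambda x: np.cos(3 * x)              # illustrative regression function
Y = f_star(X) + 0.1 * rng.standard_normal(N)

# ERM in span(F) under the squared loss = least squares on the features
Phi = dictionary(X)
coef, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

# Estimate the excess risk of the ERM on fresh design points; the bound in
# the abstract says it should be of the order of r_N(M) ~ M/N
X_test = rng.uniform(-1, 1, size=100_000)
risk_erm = np.mean((dictionary(X_test) @ coef - f_star(X_test)) ** 2)
print(f"excess-type risk of ERM: {risk_erm:.5f}; M/N = {M / N:.3f}")
```

Since \(\cos(3x)\) is well approximated within the span of degree-9 monomials, the measured risk is dominated by the estimation term, which the paper's bound controls at the rate \(M/N\).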
Recommendations
- On optimality of empirical risk minimization in linear aggregation
- Empirical risk minimization is optimal for the convex aggregation problem
- On the optimality of the empirical risk minimization procedure for the convex aggregation problem
- Aggregation via empirical risk minimization
- Sparsity in penalized empirical risk minimization
Cites Work
- Weak convergence and empirical processes. With applications to statistics
- Title not available
- Optimal learning with \textit{Q}-aggregation
- Learning Theory and Kernel Machines
- Aggregation via empirical risk minimization
- Learning by mirror averaging
- Support Vector Machines
- Concentration inequalities. A nonasymptotic theory of independence
- Title not available
- Robust linear least squares regression
- Mixing strategies for density estimation.
- Functional aggregation for nonparametric regression.
- Aggregation for Gaussian regression
- Adaptive Regression by Mixing
- Linear and convex aggregation of density estimators
- Interactions between compressed sensing random matrices and high dimensional geometry
- Title not available
- Aggregating regression procedures to improve performance
- Sparse recovery under weak moment assumptions
- Lectures on probability theory and statistics. École d'Été de Probabilités de Saint-Flour XXV - 1995. Lectures given at the summer school in Saint-Flour, France, July 10-26, 1995
- Statistical learning theory and stochastic optimization. École d'Été de Probabilités de Saint-Flour XXXI -- 2001.
- Empirical risk minimization is optimal for the convex aggregation problem
- Minimax rate of convergence and the performance of empirical risk minimization in phase recovery
- Learning without concentration
- Boosting. Foundations and algorithms.
- Bounding the smallest singular value of a random matrix without concentration
- Neural Network Learning
- A remark on the diameter of random sections of convex bodies
- Sharper lower bounds on the performance of the empirical risk minimization algorithm
Cited In (13)
- On optimality of empirical risk minimization in linear aggregation
- Suboptimality of constrained least squares and improvements via non-linear predictors
- On least squares estimation under heteroscedastic and heavy-tailed errors
- Regularization and the small-ball method. II: Complexity dependent error rates
- Distribution-free robust linear regression
- An elementary analysis of ridge regression with random design
- Title not available
- A MOM-based ensemble method for robustness, subsampling and hyperparameter tuning
- Robust statistical learning with Lipschitz and convex loss functions
- Mean estimation and regression under heavy-tailed distributions: A survey
- On aggregation for heavy-tailed classes
- Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression