Performance of empirical risk minimization in linear aggregation
Abstract: We study conditions under which, given a dictionary \(F=\{f_1,\dots,f_M\}\) and an i.i.d. sample \((X_i,Y_i)_{i=1}^N\), the empirical minimizer \(\tilde{f}^{\mathrm{ERM}}\) in \(\operatorname{span}(F)\) relative to the squared loss satisfies, with high probability, \[R\bigl(\tilde{f}^{\mathrm{ERM}}\bigr)\leq\inf_{f\in\operatorname{span}(F)}R(f)+r_N(M),\] where \(R(\cdot)\) is the squared risk and \(r_N(M)\) is of the order of \(M/N\). Among other results, we prove that a uniform small-ball estimate for functions in \(\operatorname{span}(F)\) is enough to achieve that goal when the noise is independent of the design.
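For the squared loss, the empirical minimizer over \(\operatorname{span}(F)\) reduces to a least-squares fit in the dictionary coordinates. The following is a minimal illustrative sketch, not taken from the paper; the polynomial dictionary, sample size, and regression function are assumptions chosen only for the example.

```python
import numpy as np

# Hypothetical dictionary F = {f_1, ..., f_M}: monomials x^k (an assumption
# for illustration; the paper works with an arbitrary dictionary).
M = 5
dictionary = [lambda x, k=k: x ** k for k in range(M)]

# i.i.d. sample (X_i, Y_i), i = 1, ..., N, with noise independent of the design.
rng = np.random.default_rng(0)
N = 200
X = rng.uniform(-1.0, 1.0, size=N)
Y = np.sin(np.pi * X) + 0.1 * rng.standard_normal(N)

# Design matrix: column j holds f_j evaluated at the sample points.
Phi = np.column_stack([f(X) for f in dictionary])

# ERM over span(F): minimize (1/N) * sum_i (sum_j a_j f_j(X_i) - Y_i)^2,
# i.e. ordinary least squares in the coefficients a_1, ..., a_M.
coef, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
f_erm = lambda x: np.column_stack([f(x) for f in dictionary]) @ coef

# Empirical squared risk of the ERM estimator on the sample.
print("empirical risk:", np.mean((f_erm(X) - Y) ** 2))
```

The oracle inequality in the abstract bounds the (population) squared risk of this least-squares estimator by the best risk attainable in \(\operatorname{span}(F)\) plus a residual term \(r_N(M)\).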
Recommendations
- On optimality of empirical risk minimization in linear aggregation
- Empirical risk minimization is optimal for the convex aggregation problem
- On the optimality of the empirical risk minimization procedure for the convex aggregation problem
- Aggregation via empirical risk minimization
- Sparsity in penalized empirical risk minimization
Cites work
- scientific article; zbMATH DE number 3860199
- scientific article; zbMATH DE number 1254560
- scientific article; zbMATH DE number 893887
- A remark on the diameter of random sections of convex bodies
- Adaptive Regression by Mixing
- Aggregating regression procedures to improve performance
- Aggregation for Gaussian regression
- Aggregation via empirical risk minimization
- Boosting. Foundations and algorithms.
- Bounding the smallest singular value of a random matrix without concentration
- Concentration inequalities. A nonasymptotic theory of independence
- Empirical risk minimization is optimal for the convex aggregation problem
- Functional aggregation for nonparametric regression.
- Interactions between compressed sensing random matrices and high dimensional geometry
- Learning Theory and Kernel Machines
- Learning by mirror averaging
- Learning without concentration
- Lectures on probability theory and statistics. Ecole d'Eté de probabilités de Saint-Flour XXV - 1995. Lectures given at the summer school in Saint-Flour, France, July 10-26, 1995
- Linear and convex aggregation of density estimators
- Minimax rate of convergence and the performance of empirical risk minimization in phase recovery
- Mixing strategies for density estimation.
- Neural Network Learning
- Optimal learning with Q-aggregation
- Robust linear least squares regression
- Sharper lower bounds on the performance of the empirical risk minimization algorithm
- Sparse recovery under weak moment assumptions
- Statistical learning theory and stochastic optimization. Ecole d'Eté de Probabilités de Saint-Flour XXXI -- 2001.
- Support Vector Machines
- Weak convergence and empirical processes. With applications to statistics
Cited in (13)
- On optimality of empirical risk minimization in linear aggregation
- Suboptimality of constrained least squares and improvements via non-linear predictors
- On least squares estimation under heteroscedastic and heavy-tailed errors
- Distribution-free robust linear regression
- Regularization and the small-ball method. II: Complexity dependent error rates
- An elementary analysis of ridge regression with random design
- scientific article; zbMATH DE number 7625184
- A MOM-based ensemble method for robustness, subsampling and hyperparameter tuning
- Robust statistical learning with Lipschitz and convex loss functions
- Mean estimation and regression under heavy-tailed distributions: A survey
- On aggregation for heavy-tailed classes
- Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression