Suboptimality of constrained least squares and improvements via non-linear predictors
From MaRDI portal
Publication:2108490
Cites work
- scientific article; zbMATH DE number 1522808
- scientific article; zbMATH DE number 7625184
- A distribution-free theory of nonparametric regression
- A new perspective on least squares under convex constraint
- An introduction to matrix concentration inequalities
- Average stability is invariant to data preconditioning. Implications to exp-concave empirical risk minimization
- Competitive On-line Statistics
- Distribution-free robust linear regression
- Empirical entropy, minimax regret and minimax risk
- Empirical minimization
- Empirical risk minimization is optimal for the convex aggregation problem
- Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
- Extending the scope of the small-ball method
- Fast learning rates in statistical inference through aggregation
- High-dimensional probability. An introduction with applications in data science
- High-dimensional statistics. A non-asymptotic viewpoint
- How Many Variables Should be Entered in a Regression Equation?
- Isotonic regression in general dimensions
- Learning Theory and Kernel Machines
- Learning by mirror averaging
- Learning without concentration
- Local Rademacher complexities
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Logarithmic regret algorithms for online convex optimization
- Minimax estimation via wavelet shrinkage
- Model selection via testing: an alternative to (penalized) maximum likelihood estimators.
- On optimality of empirical risk minimization in linear aggregation
- On risk bounds in isotonic and other shape restricted regression problems
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Performance of empirical risk minimization in linear aggregation
- Prediction, Learning, and Games
- Probability in Banach spaces. Isoperimetry and processes
- Random design analysis of ridge regression
- Random vectors in the isotropic position
- Rates of convergence for minimum contrast estimators
- Relative expected instantaneous loss bounds
- Relative loss bounds for on-line density estimation with the exponential family of distributions
- Risk minimization by median-of-means tournaments
- Robust covariance estimation under \(L_4\)-\(L_2\) norm equivalence
- Robust linear least squares regression
- Robust statistical learning with Lipschitz and convex loss functions
- Sharp oracle inequalities for least squares estimators in shape restricted regression
- Suboptimality of constrained least squares and improvements via non-linear predictors
- Sums of random Hermitian matrices and an inequality by Rudelson
- The lower tail of random quadratic forms with applications to ordinary least squares
- The sample complexity of learning linear predictors with the squared loss
- Understanding machine learning. From theory to algorithms
- Uniform Hanson-Wright type concentration inequalities for unbounded entries via the entropy method
Cited in (2)