Suboptimality of constrained least squares and improvements via non-linear predictors
From MaRDI portal
Publication: 2108490
DOI: 10.3150/22-BEJ1465
MaRDI QID: Q2108490
Authors: Tomas Vaškevičius, Nikita Zhivotovskiy
Publication date: 19 December 2022
Published in: Bernoulli
Full work available at URL: https://arxiv.org/abs/2009.09304
Keywords: ridge regression; empirical processes; constrained least squares; average stability; Vovk-Azoury-Warmuth forecaster
MSC classifications: Nonparametric inference (62Gxx); Linear inference, regression (62Jxx); Artificial intelligence (68Txx)
Cites Work
- Fast learning rates in statistical inference through aggregation
- Learning Theory and Kernel Machines
- Prediction, Learning, and Games
- Learning by mirror averaging
- High-Dimensional Statistics
- High-Dimensional Probability
- Probability in Banach spaces. Isoperimetry and processes
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Understanding Machine Learning
- Robust linear least squares regression
- A distribution-free theory of nonparametric regression
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Model selection via testing: an alternative to (penalized) maximum likelihood estimators.
- Local Rademacher complexities
- An Introduction to Matrix Concentration Inequalities
- Logarithmic regret algorithms for online convex optimization
- Minimax estimation via wavelet shrinkage
- Empirical risk minimization is optimal for the convex aggregation problem
- Learning without concentration
- Performance of empirical risk minimization in linear aggregation
- Rates of convergence for minimum contrast estimators
- Random vectors in the isotropic position
- On risk bounds in isotonic and other shape restricted regression problems
- Random design analysis of ridge regression
- A new perspective on least squares under convex constraint
- Sums of random Hermitian matrices and an inequality by Rudelson
- Sharp oracle inequalities for least squares estimators in shape restricted regression
- The lower tail of random quadratic forms with applications to ordinary least squares
- Empirical minimization
- How Many Variables Should be Entered in a Regression Equation?
- Competitive On-line Statistics
- Relative loss bounds for on-line density estimation with the exponential family of distributions
- Empirical entropy, minimax regret and minimax risk
- Relative expected instantaneous loss bounds
- On optimality of empirical risk minimization in linear aggregation
- Isotonic regression in general dimensions
- Distribution-free robust linear regression
- Risk minimization by median-of-means tournaments
- Robust covariance estimation under \(L_4\)-\(L_2\) norm equivalence
- Robust statistical learning with Lipschitz and convex loss functions
- Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
- Extending the scope of the small-ball method
- The sample complexity of learning linear predictors with the squared loss
- Suboptimality of constrained least squares and improvements via non-linear predictors
- Uniform Hanson-Wright type concentration inequalities for unbounded entries via the entropy method
Cited In (2)