Estimator selection in the Gaussian setting
DOI: 10.1214/13-aihp539
zbMath: 1298.62113
arXiv: 1007.2096
MaRDI QID: Q141397
Yannick Baraud, Christophe Giraud, Sylvie Huet
Publication date: 1 August 2014
Published in: Annales de l'Institut Henri Poincaré, Probabilités et Statistiques
Full work available at URL: https://arxiv.org/abs/1007.2096
Keywords: model selection; ridge regression; kernel estimator; variable selection; random forest; lasso; elastic net; estimator selection; linear estimator; PLS1 regression
MSC classifications:
Nonparametric regression and quantile regression (62G08)
Ridge regression; shrinkage estimators (Lasso) (62J07)
Linear regression; mixed models (62J05)
Cites Work
- Estimator selection in the Gaussian setting
- The Adaptive Lasso and Its Oracle Properties
- Estimator selection with respect to Hellinger-type risks
- Linear and convex aggregation of density estimators
- Structural adaptation via \(\mathbb L_p\)-norm oracle inequalities
- A survey of cross-validation procedures for model selection
- A universal procedure for aggregating estimators
- Mixing least-squares estimators when the variance is unknown
- Gaussian model selection with an unknown variance
- Model selection in nonparametric regression
- Combining different procedures for adaptive regression
- Model selection for regression on a fixed design
- Mixing strategies for density estimation.
- Functional aggregation for nonparametric regression.
- Least angle regression. (With discussion)
- Random approximants and neural networks
- Model selection by resampling penalization
- Aggregation for Gaussian regression
- On the "degrees of freedom" of the lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Model selection via testing: an alternative to (penalized) maximum likelihood estimators.
- Information Theory and Mixing Least-Squares Regressions
- Atomic Decomposition by Basis Pursuit
- Adaptive Regression by Mixing
- Learning Theory and Kernel Machines
- Regularization and Variable Selection Via the Elastic Net
- Ridge Regression: Biased Estimation for Nonorthogonal Problems
- Ridge Regression: Applications to Nonorthogonal Problems
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality
- Gaussian model selection
- Random forests
- High-dimensional regression with unknown variance