Estimation and variable selection with exponential weights
Abstract: In the context of a linear model with a sparse coefficient vector, exponential weights methods have been shown to achieve oracle inequalities for prediction. We show that such methods also succeed at variable selection and estimation under the necessary identifiability condition on the design matrix, rather than the much stronger assumptions required by other methods such as the Lasso or the Dantzig selector. The same analysis yields consistency results for Bayesian methods and BIC-type variable selection under similar conditions.
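The abstract's method can be illustrated with a toy sketch: fit least-squares estimators on all small subsets of coordinates, then aggregate them with weights proportional to exp(-residual sum of squares / temperature) times a sparsity-favoring prior. This is a minimal illustration of the exponential-weighting idea, not the paper's exact estimator; the temperature `beta_temp`, the prior decay rate, and the subset-size cap are assumptions chosen for the example.

```python
import itertools
import numpy as np

def exponential_weights(X, y, beta_temp=4.0, prior_decay=2.0, max_size=2):
    """Toy exponentially weighted aggregate over least-squares fits
    on all coordinate subsets of size <= max_size (illustrative only)."""
    n, p = X.shape
    models, estimates, risks = [], [], []
    for k in range(max_size + 1):
        for S in itertools.combinations(range(p), k):
            b = np.zeros(p)
            if S:
                cols = list(S)
                # least-squares fit restricted to the coordinates in S
                b[cols] = np.linalg.lstsq(X[:, cols], y, rcond=None)[0]
            models.append(S)
            estimates.append(b)
            risks.append(np.sum((y - X @ b) ** 2))
    # log-weights: exponential in the empirical risk, penalized by model size
    log_w = -np.array(risks) / beta_temp
    log_w -= prior_decay * np.array([len(S) for S in models])
    log_w -= log_w.max()          # stabilize before exponentiating
    w = np.exp(log_w)
    w /= w.sum()
    # weighted average of the per-model estimators
    return w @ np.array(estimates)
```

With a well-conditioned design and low noise, the weights concentrate on models containing the true support, so the aggregate is close to the sparse target vector.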
Recommendations
- Variable selection in expectile regression
- Density estimation via exponential model selection
- Exponential-Bound Property of Estimators and Variable Selection in Generalized Additive Models
- Exponential weights in multivariate regression and a low-rankness favoring prior
- Sparse estimation by exponential weighting
- PARAMETER ESTIMATION IN EXPONENTIAL MODELS
- Robust Variable Selection With Exponential Squared Loss
- Weighted least squares estimation for exchangeable binary data
- Structured variable selection and estimation
Cites work
- scientific article; zbMATH DE number 5957408 (no title available)
- scientific article; zbMATH DE number 47363 (no title available)
- scientific article; zbMATH DE number 1034037 (no title available)
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Aggregating regression procedures to improve performance
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- Can the strengths of AIC and BIC be shared? A conflict between model identification and regression estimation
- Exponential screening and optimal rates of sparse estimation
- Extended Bayesian information criteria for model selection with large model spaces
- Gaussian model selection
- Generalized mirror averaging and D-convex aggregation
- High-dimensional graphs and variable selection with the Lasso
- How well can we estimate a sparse vector?
- Information Theory and Mixing Least-Squares Regressions
- Lasso-type recovery of sparse representations for high-dimensional data
- Learning by mirror averaging
- MAP model selection in Gaussian regression
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- Mixing least-squares estimators when the variance is unknown
- Near-ideal model selection by \(\ell _{1}\) minimization
- Nearly unbiased variable selection under minimax concave penalty
- Nonconcave Penalized Likelihood With NP-Dimensionality
- Nonconcave penalized likelihood with a diverging number of parameters.
- Optimality of Graphlet Screening in High Dimensional Variable Selection
- Oracle inequalities and optimal inference under group sparsity
- Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise
- PAC-Bayesian bounds for sparse regression estimation with exponential weights
- Rate minimaxity of the Lasso and Dantzig selector for the \(l_{q}\) loss in \(l_{r}\) balls
- Reconstruction From Anisotropic Random Measurements
- Sequential Procedures for Aggregating Arbitrary Estimators of a Conditional Mean
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Sharp oracle inequalities for aggregation of affine estimators
- Simultaneous analysis of Lasso and Dantzig selector
- Sparse estimation by exponential weighting
- Sparsity oracle inequalities for the Lasso
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- UPS delivers optimal phase diagram in high-dimensional variable selection
Cited in (19)
- Fitting sparse linear models under the sufficient and necessary condition for model identification
- Aggregated Expectile Regression by Exponential Weighting
- Exponential screening and optimal rates of sparse estimation
- Bayesian linear regression with sparse priors
- Comparing and Weighting Imperfect Models Using D-Probabilities
- Empirical Bayes inference in sparse high-dimensional generalized linear models
- Bayesian inference in high-dimensional linear models using an empirical correlation-adaptive prior
- PAC-Bayesian bounds for sparse regression estimation with exponential weights
- Concentration inequalities for the exponential weighting method
- Inference without compatibility: using exponential weighting for inference on a parameter of a linear model
- On the exponentially weighted aggregate with the Laplace prior
- Empirical priors and coverage of posterior credible sets in a sparse normal mean model
- Statistical and computational aspects of learning with complex structure. Abstracts from the workshop held May 5--11, 2019
- scientific article; zbMATH DE number 3867203 (no title available)
- Ordered smoothers with exponential weighting
- Variable selection via penalized credible regions with Dirichlet-Laplace global-local shrinkage priors
- Sparse estimation by exponential weighting
- Empirical priors for prediction in sparse high-dimensional linear regression
- Data-driven priors and their posterior concentration rates