Estimation and variable selection with exponential weights
From MaRDI portal
DOI: 10.1214/14-EJS883 · zbMath: 1294.62164 · arXiv: 1208.2635 · MaRDI QID: Q2447091
Karim Lounici, Ery Arias-Castro
Publication date: 24 April 2014
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1208.2635
estimation, model selection, Gibbs sampler, variable selection, exponential weights, sparse linear model, identifiability condition
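The keywords above name the exponential-weights aggregation scheme for sparse linear models. As a rough illustration only (not the paper's exact estimator, and all names here are hypothetical), the following sketch forms weights proportional to exp(-RSS/β) over a small dictionary of candidate predictors and averages their predictions:

```python
import math
import random

def exponential_weights(rss, beta):
    """Normalized weights w_j proportional to exp(-rss_j / beta)."""
    m = min(rss)  # subtract the minimum RSS for numerical stability
    raw = [math.exp(-(r - m) / beta) for r in rss]
    total = sum(raw)
    return [w / total for w in raw]

def aggregate(preds, rss, beta):
    """Pointwise weighted average of the candidate predictions."""
    w = exponential_weights(rss, beta)
    n = len(preds[0])
    return [sum(w[j] * preds[j][i] for j in range(len(preds)))
            for i in range(n)]

# Toy example: three candidate slopes for a noisy linear signal y = 2x + noise.
random.seed(0)
x = [i / 10 for i in range(20)]
y = [2.0 * xi + random.gauss(0, 0.1) for xi in x]
candidates = [[s * xi for xi in x] for s in (1.5, 2.0, 2.5)]
rss = [sum((yi - fi) ** 2 for yi, fi in zip(y, f)) for f in candidates]
agg = aggregate(candidates, rss, beta=0.5)
```

The temperature β controls how sharply the weights concentrate on the best-fitting candidate; in the small-β limit the aggregate reduces to model selection, which is the interplay the paper studies.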
Related Items
Bayesian inference in high-dimensional linear models using an empirical correlation-adaptive prior, Fitting sparse linear models under the sufficient and necessary condition for model identification, Variable selection via penalized credible regions with Dirichlet-Laplace global-local shrinkage priors, Ordered smoothers with exponential weighting, Bayesian linear regression with sparse priors, Comparing and Weighting Imperfect Models Using D-Probabilities, Empirical priors and coverage of posterior credible sets in a sparse normal mean model, Concentration inequalities for the exponential weighting method, On the exponentially weighted aggregate with the Laplace prior, Data-driven priors and their posterior concentration rates, Statistical and computational aspects of learning with complex structure. Abstracts from the workshop held May 5--11, 2019
Uses Software
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- UPS delivers optimal phase diagram in high-dimensional variable selection
- Exponential screening and optimal rates of sparse estimation
- Oracle inequalities and optimal inference under group sparsity
- Generalized mirror averaging and \(D\)-convex aggregation
- Near-ideal model selection by \(\ell _{1}\) minimization
- Learning by mirror averaging
- Lasso-type recovery of sparse representations for high-dimensional data
- Mixing least-squares estimators when the variance is unknown
- Aggregating regression procedures to improve performance
- Nonconcave penalized likelihood with a diverging number of parameters.
- How well can we estimate a sparse vector?
- Sharp oracle inequalities for aggregation of affine estimators
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- MAP model selection in Gaussian regression
- PAC-Bayesian bounds for sparse regression estimation with exponential weights
- Simultaneous analysis of Lasso and Dantzig selector
- Sparsity oracle inequalities for the Lasso
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Optimality of Graphlet Screening in High Dimensional Variable Selection
- Reconstruction From Anisotropic Random Measurements
- Extended Bayesian information criteria for model selection with large model spaces
- Can the strengths of AIC and BIC be shared? A conflict between model identification and regression estimation
- Information Theory and Mixing Least-Squares Regressions
- Sequential Procedures for Aggregating Arbitrary Estimators of a Conditional Mean
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Nonconcave Penalized Likelihood With NP-Dimensionality
- Orthogonal Matching Pursuit for Sparse Signal Recovery With Noise
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- Gaussian model selection
- Sparse estimation by exponential weighting
- A general theory of concave regularization for high-dimensional sparse estimation problems