PAC-Bayesian bounds for sparse regression estimation with exponential weights
Publication: 1952177
DOI: 10.1214/11-EJS601
zbMath: 1274.62463
arXiv: 1009.2707
MaRDI QID: Q1952177
Publication date: 28 May 2013
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1009.2707
Keywords: exponential weights; high-dimensional regression; PAC-Bayesian inequalities; sparsity oracle inequality
62G08: Nonparametric regression and quantile regression
62J07: Ridge regression; shrinkage estimators (Lasso)
62J05: Linear regression; mixed models
62F15: Bayesian inference
68T05: Learning and adaptive systems in artificial intelligence
62B10: Statistical aspects of information-theoretic topics
Related Items
- Structured, Sparse Aggregation
- General Robust Bayes Pseudo-Posteriors: Exponential Convergence Results with Applications
- Learning theory of multiple kernel learning
- Prediction of time series by statistical learning: general losses and fast rates
- Sparse estimation by exponential weighting
- Robust Bayes estimation using the density power divergence
- Combining a relaxed EM algorithm with Occam's razor for Bayesian variable selection in high-dimensional regression
- Ordered smoothers with exponential weighting
- Kullback-Leibler aggregation and misspecified generalized linear models
- Concentration inequalities for the exponential weighting method
- Exponential screening and optimal rates of sparse estimation
- PAC-Bayesian high dimensional bipartite ranking
- A quasi-Bayesian perspective to online clustering
- On the exponentially weighted aggregate with the Laplace prior
- Sharp oracle inequalities for aggregation of affine estimators
- PAC-Bayesian estimation and prediction in sparse additive models
- Upper bounds and aggregation in bipartite ranking
- Estimation from nonlinear observations via convex programming with application to bilinear regression
- Comments on: ``On active learning methods for manifold data
- Exponential weights in multivariate regression and a low-rankness favoring prior
- A Bayesian approach for noisy matrix completion: optimal rate under general sampling distribution
- Aggregation of affine estimators
- Estimation and variable selection with exponential weights
- Optimal learning with \(Q\)-aggregation
- On some recent advances on high dimensional Bayesian statistics
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- Sparse regression learning by aggregation and Langevin Monte-Carlo
- Mirror averaging with sparsity priors
- Exponential screening and optimal rates of sparse estimation
- The Dantzig selector and sparsity oracle inequalities
- Generalized mirror averaging and \(D\)-convex aggregation
- PAC-Bayesian bounds for randomized empirical risk minimizers
- From \(\varepsilon\)-entropy to KL-entropy: analysis of minimum information complexity density estimation
- Concentration inequalities and model selection. Ecole d'Eté de Probabilités de Saint-Flour XXXIII -- 2003.
- Learning by mirror averaging
- Estimating the dimension of a model
- Aggregating regression procedures to improve performance
- Laplace transform estimates and deviation inequalities
- Least angle regression. (With discussion)
- Statistical learning theory and stochastic optimization. Ecole d'Eté de Probabilités de Saint-Flour XXXI -- 2001.
- Aggregated estimators and empirical complexity for least square regression
- On the conditions used to prove oracle results for the Lasso
- Some PAC-Bayesian theorems
- Simultaneous analysis of Lasso and Dantzig selector
- Sparsity oracle inequalities for the Lasso
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Aggregation for Gaussian regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Atomic Decomposition by Basis Pursuit
- Information Theory and Mixing Least-Squares Regressions
- A Statistical View of Some Chemometrics Regression Tools
- On Recovery of Sparse Signals Via \(\ell_1\) Minimization
- Learning Theory and Kernel Machines
- Regularization and Variable Selection Via the Elastic Net
- DINS, a MIP Improvement Heuristic
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- Some Comments on \(C_p\)
- Introduction to nonparametric estimation