Bayesian high-dimensional semi-parametric inference beyond sub-Gaussian errors
From MaRDI portal
Publication:2132004
Abstract: We consider a sparse linear regression model with unknown symmetric error under the high-dimensional setting. The true error distribution is assumed to belong to the locally \(\beta\)-Hölder class with an exponentially decreasing tail, and it need not be sub-Gaussian. We obtain posterior convergence rates for the regression coefficients and the error density that are nearly optimal and adaptive to the unknown sparsity level. Furthermore, we derive a semi-parametric Bernstein-von Mises (BvM) theorem characterizing the asymptotic shape of the marginal posterior for the regression coefficients. Under a sub-Gaussianity assumption on the true score function, strong model selection consistency for the regression coefficients is also obtained, which in turn establishes the frequentist validity of credible sets.
Recommendations
- Bayesian sparse linear regression with unknown symmetric error
- Bayesian linear regression with sparse priors
- The semi-parametric Bernstein-von Mises theorem for regression models with symmetric errors
- Bayesian regression with nonparametric heteroskedasticity
- Finite sample posterior concentration in high-dimensional regression
Cites work
- scientific article; zbMATH DE number 845714
- A Bernstein-von Mises theorem for smooth functionals in semiparametric models
- A Bound on Tail Probabilities for Quadratic Forms in Independent Random Variables
- A bound on tail probabilities for quadratic forms in independent random variables whose distributions are not necessarily symmetric
- A novel approach to Bayesian consistency
- Adaptive Bayesian density estimation with location-scale mixtures
- Adaptive Bayesian multivariate density estimation with Dirichlet mixtures
- Adaptive density estimation based on a mixture of gammas
- Asymptotic Statistics
- Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem
- Bayesian linear regression with sparse priors
- Bayesian sparse linear regression with unknown symmetric error
- Bayesian variable selection with shrinking and diffusing priors
- Calibration and empirical Bayes variable selection
- Conditions for posterior contraction in the sparse normal means problem
- Confidence intervals for low dimensional parameters in high dimensional linear models
- Consistent model selection criteria for quadratically supported risks
- Dirichlet-Laplace priors for optimal shrinkage
- Empirical Bayes posterior concentration in sparse high-dimensional linear models
- Finite sample Bernstein-von Mises theorem for semiparametric problems
- Generalized double Pareto shrinkage
- Needles and straw in a haystack: posterior concentration for possibly sparse sequences
- Nonparametric Bernstein-von Mises theorems in Gaussian white noise
- On the Bernstein-von Mises phenomenon for nonparametric Bayes procedures
- On the computational complexity of high-dimensional Bayesian variable selection
- On the conditions used to prove oracle results for the Lasso
- Posterior Inference in Bayesian Quantile Regression with Asymmetric Laplace Likelihood
- Posterior asymptotics of nonparametric location-scale mixtures for multivariate density estimation
- Posterior consistency in linear models under shrinkage priors
- Rate minimaxity of the Lasso and Dantzig selector for the \(l_{q}\) loss in \(l_{r}\) balls
- Regularization and Variable Selection Via the Elastic Net
- Scalable Bayesian variable selection using nonlocal prior densities in ultrahigh-dimensional settings
- Simultaneous analysis of Lasso and Dantzig selector
- Skinny Gibbs: a consistent and scalable Gibbs sampler for model selection
- Sparsity and Smoothness Via the Fused Lasso
- Statistics for high-dimensional data. Methods, theory and applications.
- The Adaptive Lasso and Its Oracle Properties
- The Bernstein-von Mises theorem under misspecification
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- The horseshoe estimator for sparse signals
- The semi-parametric Bernstein-von Mises theorem for regression models with symmetric errors
- The spike-and-slab LASSO
- Tractable Bayesian variable selection: beyond normality
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Cited in (5)
- Bayesian sparse linear regression with unknown symmetric error
- Approximating posteriors with high-dimensional nuisance parameters via integrated rotated Gaussian approximation
- Unified Bayesian theory of sparse linear regression with nuisance parameters
- Adaptive variational Bayes: optimality, computation and applications
- Strong replica symmetry in high-dimensional optimal Bayesian inference