Bayesian factor-adjusted sparse regression
From MaRDI portal
Publication: Q2155305
Cites work
- scientific article; zbMATH DE number 5957408
- scientific article; zbMATH DE number 845714
- A Selective Overview of Variable Selection in High Dimensional Feature Space (Invited Review Article)
- A simple proof of the restricted isometry property for random matrices
- A useful variant of the Davis-Kahan theorem for statisticians
- Asymptotics of empirical eigenstructure for high dimensional spiked covariance
- Bayesian linear regression with sparse priors
- Bayesian variable selection with shrinking and diffusing priors
- Confidence Intervals for Diffusion Index Forecasts and Inference for Factor-Augmented Regressions
- Contour projected dimension reduction
- Convergence rates of posterior distributions
- Determining the Number of Factors in Approximate Factor Models
- Dirichlet-Laplace priors for optimal shrinkage
- Eigenvalue ratio test for the number of factors
- Factor modeling for high-dimensional time series: inference for the number of factors
- Factor models and variable selection in high-dimensional regression analysis
- Factor-Adjusted Regularized Model Selection
- Forecasting Using Principal Components From a Large Number of Predictors
- Generalized double Pareto shrinkage
- High dimensional covariance matrix estimation using a factor model
- High-dimensional covariance matrix estimation in approximate factor models
- High-dimensional probability. An introduction with applications in data science
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Inferential Theory for Factor Models of Large Dimensions
- Large covariance estimation by thresholding principal orthogonal complements. With discussion and authors' reply
- Local shrinkage rules, Lévy processes and regularized regression
- On consistency and sparsity for principal components analysis in high dimensions
- On the computational complexity of high-dimensional Bayesian variable selection
- Optimally sparse representation in general (nonorthogonal) dictionaries via \(\ell_1\) minimization
- Rates of convergence of posterior distributions
- Robust high-dimensional factor models with applications to statistical machine learning
- SLOPE is adaptive to unknown sparsity and asymptotically minimax
- Simultaneous analysis of Lasso and Dantzig selector
- Sparse models and methods for optimal instruments with an application to eminent domain
- Sparsity oracle inequalities for the Lasso
- Stable recovery of sparse overcomplete representations in the presence of noise
- Statistics for high-dimensional data. Methods, theory and applications
- Stochastic Perturbation Theory
- Sure independence screening for ultrahigh dimensional feature space. With discussion and authors' reply
- The Bayesian Lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\) (with discussion and rejoinder)
- The Rotation of Eigenvectors by a Perturbation. III
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- The spike-and-slab LASSO
- Uncertainty principles and ideal atomic decomposition
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Cited in (5)
- Bayesian estimation of sparse dynamic factor models with order-independent and ex-post mode identification
- Factor-Adjusted Regularized Model Selection
- Bayesian sparse seemingly unrelated regressions model with variable selection and covariance estimation via the horseshoe+
- Bayesian Factor-adjusted Sparse Regression
- Sparse factor model based on trend filtering