Lasso Inference for High-Dimensional Time Series
From MaRDI portal
Abstract: In this paper we develop valid inference for high-dimensional time series. We extend the desparsified lasso to a time series setting under Near-Epoch Dependence (NED) assumptions allowing for non-Gaussian, serially correlated and heteroskedastic processes, where the number of regressors can possibly grow faster than the time dimension. We first derive an error bound under weak sparsity, which, coupled with the NED assumption, means this inequality can also be applied to the (inherently misspecified) nodewise regressions performed in the desparsified lasso. This allows us to establish the uniform asymptotic normality of the desparsified lasso under general conditions, including for inference on parameters of increasing dimensions. Additionally, we show consistency of a long-run variance estimator, thus providing a complete set of tools for performing inference in high-dimensional linear time series models. Finally, we perform a simulation exercise to demonstrate the small sample properties of the desparsified lasso in common time series settings.
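The debiasing procedure the abstract describes can be sketched in a few steps: fit an initial lasso, run nodewise lasso regressions to approximate the precision matrix, apply the debiasing correction, and build plug-in confidence intervals. The sketch below is illustrative only and uses simulated i.i.d. data with a simple sample covariance in place of the paper's HAC long-run variance estimator (an assumption for brevity); all variable names and tuning choices (e.g. the penalty `lam`) are hypothetical, not taken from the paper.

```python
# Minimal sketch of the desparsified (debiased) lasso. The paper's HAC
# long-run variance estimator is replaced here by a plain sample
# covariance, which is only valid for serially uncorrelated errors.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [1.0, -0.5, 0.25]          # sparse true coefficient vector
y = X @ beta + rng.standard_normal(n)

# Step 1: initial lasso fit (penalty level chosen ad hoc for illustration)
lam = 0.1
beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_

# Step 2: nodewise lasso regressions approximating the precision matrix.
# These regressions are inherently misspecified under dependence, which is
# why the paper derives error bounds that still cover them.
Theta = np.zeros((p, p))
for j in range(p):
    Xj = X[:, j]
    X_minus = np.delete(X, j, axis=1)
    node = Lasso(alpha=lam, fit_intercept=False).fit(X_minus, Xj)
    resid = Xj - X_minus @ node.coef_
    tau2 = resid @ Xj / n             # nodewise scaling factor
    row = np.zeros(p)
    row[j] = 1.0
    row[np.arange(p) != j] = -node.coef_
    Theta[j] = row / tau2

# Step 3: debiasing correction, removing the lasso's shrinkage bias
resid_y = y - X @ beta_hat
b_hat = beta_hat + Theta @ X.T @ resid_y / n

# Step 4: plug-in standard errors and 95% confidence intervals
# (a HAC/long-run variance estimator would replace Omega for time series)
scores = X * resid_y[:, None]
Omega = scores.T @ scores / n
se = np.sqrt(np.diag(Theta @ Omega @ Theta.T) / n)
ci_low, ci_high = b_hat - 1.96 * se, b_hat + 1.96 * se
```

In this toy setting the debiased estimates `b_hat` are approximately normal around the true coefficients, so the intervals support standard t-type inference even though the initial lasso estimates are biased by shrinkage.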
Cites work
- scientific article; zbMATH DE number 845714 (no title available)
- scientific article; zbMATH DE number 6438182 (no title available)
- A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix
- A justification of conditional confidence intervals
- A maximal inequality and dependent strong laws
- A survey of \(L_1\) regression
- Adaptive Lasso for sparse high-dimensional regression models
- Asymptotic Statistics
- Asymptotics for linear processes
- Asymptotics of the principal components estimator of large factor models with weakly influential factors
- Autoregressive process modeling via the Lasso procedure
- Block bootstrap HAC robust tests: the sophistication of the naive bootstrap
- Boosting for high-dimensional linear models
- Bootstrap based inference for sparse high-dimensional time series models
- Comparison and anti-concentration bounds for maxima of Gaussian random vectors
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Confidence intervals for low dimensional parameters in high dimensional linear models
- Double/debiased machine learning for treatment and structural parameters
- Establishing conditions for the functional central limit theorem in nonlinear and semiparametric time series processes.
- Estimation and testing under sparsity. École d'Été de Probabilités de Saint-Flour XLV -- 2015
- Exact post-selection inference, with application to the Lasso
- Extended BIC for small-\(n\)-large-\(P\) sparse GLM
- Forecasting using a large number of predictors: is Bayesian shrinkage a valid alternative to principal components?
- GARCH (1,1) processes are near epoch dependent
- Gaussian approximation for high dimensional time series
- Gaussian approximation for high dimensional vector under physical dependence
- Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors
- Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation
- High-dimensional simultaneous inference with the bootstrap
- Inference on Causal and Structural Parameters using Many Moment Inequalities
- Inference on treatment effects after selection among high-dimensional controls
- Lasso guarantees for \(\beta \)-mixing heavy-tailed time series
- Lasso-driven inference in time and space
- Lasso-type recovery of sparse representations for high-dimensional data
- Least angle and \(\ell _{1}\) penalized regression: a review
- MODEL SELECTION AND INFERENCE: FACTS AND FICTION
- Non-strong mixing autoregressive processes
- Nonlinear system theory: Another look at dependence
- On asymptotically optimal confidence regions and tests for high-dimensional models
- On the asymptotic variance of the debiased Lasso
- On the range of validity of the autoregressive sieve bootstrap
- Oracle inequalities for high dimensional vector autoregressions
- Performance bounds for parameter estimates of high-dimensional linear models with correlated errors
- Persistence in high-dimensional linear predictor-selection and the virtue of overparametrization
- Program evaluation and causal inference with high-dimensional data
- Regression coefficient and autoregressive order shrinkage and selection via the lasso
- Regularized estimation in sparse high-dimensional time series models
- Regularized estimation of high‐dimensional vector autoregressions with weakly dependent innovations
- Simultaneous analysis of Lasso and Dantzig selector
- Sparse estimators and the oracle property, or the return of Hodges' estimator
- Sparse models and methods for optimal instruments with an application to eminent domain
- Sparsity oracle inequalities for the Lasso
- Statistics for high-dimensional data. Methods, theory and applications.
- Subset selection for vector autoregressive processes using Lasso
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Uniformly valid confidence intervals post-model-selection
- Valid post-selection inference
- \(\ell_1\)-regularization of high-dimensional time-series models with non-Gaussian and heteroskedastic errors
Cited in (6)
- Sparse principal component analysis for high‐dimensional stationary time series
- Regularized estimation in sparse high-dimensional time series models
- FNETS: Factor-Adjusted Network Estimation and Forecasting for High-Dimensional Time Series
- desla
- Lag weighted lasso for time series model
- Fused-Lasso Regularized Cholesky Factors of Large Nonstationary Covariance Matrices of Replicated Time Series
This page was built for publication: Lasso Inference for High-Dimensional Time Series (MaRDI item Q95760)