Lasso guarantees for \(\beta \)-mixing heavy-tailed time series
DOI: 10.1214/19-AOS1840
zbMath: 1450.62117
arXiv: 1708.01505
MaRDI QID: Q2196212
Ambuj Tewari, Kam Chung Wong, Zifan Li
Publication date: 28 August 2020
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1708.01505
MSC classification:
- Time series, auto-correlation, regression, etc. in statistics (GARCH) (62M10)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Statistics of extreme values; tail inference (62G32)
- Reliability and life testing (62N05)
Related Items
- Finite sample theory for high-dimensional functional/scalar time series with applications
- Penalized estimation of threshold auto-regressive models with many components and thresholds
- Fast and Scalable Algorithm for Detection of Structural Breaks in Big VAR Models
- Regularized estimation of high-dimensional vector autoregressions with weakly dependent innovations
- High-dimensional latent panel quantile regression with an application to asset pricing
- Adaptive group Lasso neural network models for functions of few variables and time-dependent data
- The EAS approach for graphical selection consistency in vector autoregression models
- Oracle inequality for sparse trace regression models with exponential \(\beta\)-mixing errors
- Predictive quantile regression with mixed roots and increasing dimensions: the ALQR approach
- Testing the martingale difference hypothesis in high dimension
- Rate-optimal robust estimation of high-dimensional vector autoregressive models
- Sparse principal component analysis for high-dimensional stationary time series
- Optimal covariance matrix estimation for high-dimensional noise in high-frequency data
- Robust multiscale estimation of time-average variance for time series segmentation
- Stochastic Saddle Point Problems with Decision-Dependent Distributions
- Central limit theorems for high dimensional dependent data
- Structural inference in sparse high-dimensional vector autoregressions
- Lasso Inference for High-Dimensional Time Series
- Concentration Inequalities for Statistical Inference
- High-dimensional inference for linear model with correlated errors
- Confidence intervals for parameters in high-dimensional sparse vector autoregression
- On consistency and sparsity for high-dimensional functional time series with application to autoregressions
Cites Work
- Regularized estimation in sparse high-dimensional time series models
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Exact post-selection inference, with application to the Lasso
- Oracle inequalities for high dimensional vector autoregressions
- Penalized least squares estimation with weakly dependent data
- Statistics for high-dimensional data. Methods, theory and applications.
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- Autoregressive process modeling via the Lasso procedure
- A Bernstein type inequality and moderate deviations for weakly dependent sequences
- High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity
- Infinite-dimensional VARs and factor models
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- Hanson-Wright inequality and sub-Gaussian concentration
- \(\ell_1\)-regularization of high-dimensional time-series models with non-Gaussian and heteroskedastic errors
- Estimating beta-mixing coefficients via histograms
- Basic properties of strong mixing conditions. A survey and some open questions
- Rates of convergence for empirical processes of stationary mixing sequences
- Gaussian approximation for high dimensional time series
- Learning and generalisation. With applications to neural networks.
- The convex geometry of linear inverse problems
- Sparsity considerations for dependent variables
- High-dimensional autocovariance matrices and optimal linear prediction
- Simultaneous analysis of Lasso and Dantzig selector
- Covariance and precision matrix estimation for high-dimensional time series
- Random matrices: sharp concentration of eigenvalues
- Reconstruction From Anisotropic Random Measurements
- An Introduction to Heavy-Tailed and Subexponential Distributions
- Non-linear time series and Markov chains
- A central limit theorem and a strong mixing condition
- Econometric Analysis of High Dimensional VARs Featuring a Dominant Unit
- Regression coefficient and autoregressive order shrinkage and selection via the lasso
- Restricted strong convexity and weighted matrix completion: Optimal bounds with noise
- Towards a Unified Approach for Proving Geometric Ergodicity and Mixing Properties of Nonlinear Autoregressive Processes
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Performance bounds for parameter estimates of high-dimensional linear models with correlated errors