Asymptotic Equivalence of Regularization Methods in Thresholded Parameter Space
Publication:2861817
DOI: 10.1080/01621459.2013.803972 · OpenAlex: W3103645229 · MaRDI QID: Q2861817
Publication date: 11 November 2013
Published in: Journal of the American Statistical Association
Full work available at URL: https://arxiv.org/abs/1605.03310
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- A unified approach to model selection and sparse recovery using regularized least squares
- The Adaptive Lasso and Its Oracle Properties
- Least angle regression. (With discussion)
- Weak convergence and empirical processes. With applications to statistics
- Pathwise coordinate optimization
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- One-step sparse estimates in nonconcave penalized likelihood models
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional classification using features annealed independence rules
- Sparsity oracle inequalities for the Lasso
- Optimally sparse representation in general (nonorthogonal) dictionaries via \(\ell_1\) minimization
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Risk bounds for model selection via penalization
- Nonconcave penalized likelihood with a diverging number of parameters.
- Stable recovery of sparse overcomplete representations in the presence of noise
- Regularization of Wavelet Approximations
- Nonconcave Penalized Likelihood With NP-Dimensionality
- High-Dimensional Sparse Additive Hazards Regression
Cited In (28)
- Title not available
- Predictive stability criteria for penalty selection in linear models
- Parallel integrative learning for large-scale multi-response regression with incomplete outcomes
- High dimensional generalized linear models for temporal dependent data
- Nonsparse Learning with Latent Variables
- Variable selection for high‐dimensional generalized linear model with block‐missing data
- Robust High-Dimensional Regression with Coefficient Thresholding and Its Application to Imaging Data Analysis
- High-dimensional statistical inference via DATE
- Title not available
- Conditional quantile correlation screening procedure for ultrahigh-dimensional varying coefficient models
- Principal varying coefficient estimator for high-dimensional models
- Title not available
- RANK: Large-Scale Inference With Graphical Nonlinear Knockoffs
- Blessing of massive scale: spatial graphical model estimation with a total cardinality constraint approach
- Title not available
- Model Selection for High-Dimensional Quadratic Regression via Regularization
- Regularized estimation in sparse high-dimensional time series models
- Leveraging mixed and incomplete outcomes via reduced-rank modeling
- Semi-Standard Partial Covariance Variable Selection When Irrepresentable Conditions Fail
- Best subset selection via a modern optimization lens
- Greedy forward regression for variable screening
- Innovated interaction screening for high-dimensional nonlinear classification
- Variable Selection With Second-Generation P-Values
- IPAD: Stable Interpretable Forecasting with Knockoffs Inference
- Partitioned Approach for High-dimensional Confidence Intervals with Large Split Sizes
- Asymptotic equivalence of regularization methods in thresholded parameter space
- Statistical insights into deep neural network learning in subspace classification
- Penalized least squares estimation with weakly dependent data