Lasso-type recovery of sparse representations for high-dimensional data

From MaRDI portal
Publication: 1002157

DOI: 10.1214/07-AOS582
zbMath: 1155.62050
arXiv: 0806.0145
OpenAlex: W3106266785
Wikidata: Q105584243 (Scholia: Q105584243)
MaRDI QID: Q1002157

Nicolai Meinshausen, Bin Yu

Publication date: 25 February 2009

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/0806.0145



Related Items

Influence measures and stability for graphical models, Sparse Laplacian shrinkage with the graphical Lasso estimator for regression problems, An analysis of penalized interaction models, Model selection consistency of Lasso for empirical data, Some sharp performance bounds for least squares regression with \(L_1\) regularization, Near-ideal model selection by \(\ell _{1}\) minimization, High-dimensional variable selection, SLOPE is adaptive to unknown sparsity and asymptotically minimax, Variable selection in censored quantile regression with high dimensional data, Least squares approximation with a diverging number of parameters, AIC for the Lasso in generalized linear models, Ridge regression revisited: debiasing, thresholding and bootstrap, Sub-optimality of some continuous shrinkage priors, Extensions of stability selection using subsamples of observations and covariates, Strong consistency of Lasso estimators, Oracle inequalities for the lasso in the Cox model, Rates of convergence of the adaptive LASSO estimators to the oracle distribution and higher order refinements by the bootstrap, Statistical significance in high-dimensional linear models, Generalized Kalman smoothing: modeling and algorithms, A doubly sparse approach for group variable selection, Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression, Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization, Variable selection in high-dimensional quantile varying coefficient models, Correlated variables in regression: clustering and sparse estimation, Grouping strategies and thresholding for high dimensional linear models, \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities, A generalized elastic net regularization with smoothed \(\ell _{q}\) penalty for sparse vector recovery, \(\ell_{1}\)-penalization for mixture regression models, \(\ell_1\)-regularization of 
high-dimensional time-series models with non-Gaussian and heteroskedastic errors, Robust machine learning by median-of-means: theory and practice, A two-step method for estimating high-dimensional Gaussian graphical models, Adaptive Dantzig density estimation, Autoregressive process modeling via the Lasso procedure, Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization, PAC-Bayesian estimation and prediction in sparse additive models, On the asymptotic properties of the group lasso estimator for linear models, Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization, Selection of variables and dimension reduction in high-dimensional non-parametric regression, Thresholding-based iterative selection procedures for model selection and shrinkage, On the conditions used to prove oracle results for the Lasso, MAP model selection in Gaussian regression, The Lasso as an \(\ell _{1}\)-ball model selection procedure, The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso), The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods, Least squares after model selection in high-dimensional sparse models, Sign-constrained least squares estimation for high-dimensional regression, Transductive versions of the Lasso and the Dantzig selector, Estimation in high-dimensional linear models with deterministic design matrices, General nonexact oracle inequalities for classes with a subexponential envelope, Regularization for Cox's proportional hazards model with NP-dimensionality, Generalization of constraints for high dimensional regression problems, Which bridge estimator is the best for variable selection?, Focused vector information criterion model selection and model averaging regression with missing response, Bayesian high-dimensional screening via MCMC, Estimation of high-dimensional partially-observed discrete Markov random 
fields, A new perspective on least squares under convex constraint, High-dimensional Bayesian inference in nonparametric additive models, Discussion: One-step sparse estimates in nonconcave penalized likelihood models, The sparsity and bias of the LASSO selection in high-dimensional linear regression, Generalized M-estimators for high-dimensional Tobit I models, On the residual empirical process based on the ALASSO in high dimensions and its functional oracle property, Detecting groups in large vector autoregressions, Estimation of average treatment effects with panel data: asymptotic theory and implementation, Regularization and the small-ball method. I: Sparse recovery, Pivotal estimation via square-root lasso in nonparametric regression, Regression analysis of locality preserving projections via sparse penalty, High-dimensional mean estimation via \(\ell_1\) penalized normal likelihood, A Cluster Elastic Net for Multivariate Regression, High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity, High-dimensional variable screening and bias in subsequent inference, with an empirical comparison, Inference under Fine-Gray competing risks model with high-dimensional covariates, Consistent tuning parameter selection in high dimensional sparse linear regression, A global homogeneity test for high-dimensional linear regression, Least angle and \(\ell _{1}\) penalized regression: a review, On asymptotically optimal confidence regions and tests for high-dimensional models, Lasso Inference for High-Dimensional Time Series, Greedy variance estimation for the LASSO, Consistency of Bayesian linear model selection with a growing number of parameters, Nearly unbiased variable selection under minimax concave penalty, Variable selection in nonparametric additive models, Recovery of seismic wavefields by an \(l_{q}\)-norm constrained regularization method, Semiparametric efficiency bounds for high-dimensional models, Consistent multiple 
changepoint estimation with fused Gaussian graphical models, Multi-Resolution Functional ANOVA for Large-Scale, Many-Input Computer Experiments, Multicarving for high-dimensional post-selection inference, Graphical-model based high dimensional generalized linear models, Ultrahigh dimensional precision matrix estimation via refitted cross validation, A review of Gaussian Markov models for conditional independence, Fuzzy Lasso regression model with exact explanatory variables and fuzzy responses, A sparse optimization problem with hybrid \(L_2\)-\(L_p\) regularization for application of magnetic resonance brain images, Model-robust subdata selection for big data, Provable training set debugging for linear regression, Feature selection for data integration with mixed multiview data, High-dimensional additive modeling, Structured estimation for the nonparametric Cox model, Multivariate sparse Laplacian shrinkage for joint estimation of two graphical structures, Minimax-optimal nonparametric regression in high dimensions, Sparse spatio-temporal autoregressions by profiling and bagging, Nonparametric and high-dimensional functional graphical models, Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates, Identifying small mean-reverting portfolios, Variable Selection With Second-Generation P-Values, Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization, An Augmented Lagrangian Method for $\ell_{1}$-Regularized Optimization Problems with Orthogonality Constraints, Simultaneous analysis of Lasso and Dantzig selector, Minimization of $L_1$ Over $L_2$ for Sparse Signal Recovery with Convergence Guarantee, Adaptive elastic net-penalized quantile regression for variable selection, Rates of convergence of the adaptive elastic net and the post-selection procedure in ultra-high dimensional sparse models, Sparse linear regression models of high dimensional covariates with non-Gaussian 
outliers and Berkson error-in-variable under heteroscedasticity, Variable selection and forecasting via automated methods for linear models: LASSO/adaLASSO and Autometrics, Estimation and variable selection in partial linear single index models with error-prone linear covariates, Monte Carlo Simulation for Lasso-Type Problems by Estimator Augmentation, Lasso for sparse linear regression with exponentially \(\beta\)-mixing errors, Penalised robust estimators for sparse and high-dimensional linear models, On estimation error bounds of the Elastic Net when \(p \gg n\), A unified penalized method for sparse additive quantile models: an RKHS approach, Searching for minimal optimal neural networks, High-dimensional robust regression with \(L_q\)-loss functions, Convex biclustering, Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso, SCAD-penalized quantile regression for high-dimensional data analysis and variable selection, A new approach for ultrahigh-dimensional covariance matrix estimation, A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates, Non-asymptotic oracle inequalities for the Lasso and Group Lasso in high dimensional logistic model, Controlling False Discovery Rate Using Gaussian Mirrors, Correlation Tensor Decomposition and Its Application in Spatial Imaging Data, Blessing of massive scale: spatial graphical model estimation with a total cardinality constraint approach, High-dimensional generalized linear models and the lasso, Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators, Optimal learning, A reduced half thresholding algorithm, Rotation to sparse loadings using \(L^p\) losses and related inference problems, Support union recovery in high-dimensional multivariate regression, Regression on manifolds: estimation of the exterior derivative, \(\ell_1\)-penalized quantile
regression in high-dimensional sparse models, DIF statistical inference without knowing anchoring items, High-dimensional functional graphical model structure learning via neighborhood selection approach, A new hybrid \(l_p\)-\(l_2\) model for sparse solutions with applications to image processing, Recovery of partly sparse and dense signals, Optimal Sparse Linear Prediction for Block-missing Multi-modality Data Without Imputation, High-dimensional sparse portfolio selection with nonnegative constraint, Calibrating nonconvex penalized regression in ultra-high dimension, On the sparsity of Lasso minimizers in sparse data recovery, Moderate-Dimensional Inferences on Quadratic Functionals in Ordinary Least Squares, Estimation and variable selection with exponential weights, Cross-Validation With Confidence, Approximate Spectral Gaps for Markov Chain Mixing Times in High Dimensions, Consistent parameter estimation for Lasso and approximate message passing, ORACLE EFFICIENT VARIABLE SELECTION IN RANDOM AND FIXED EFFECTS PANEL DATA MODELS, The Lasso for High Dimensional Regression with a Possible Change Point, Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models, Lazy lasso for local regression, A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers, A general theory of concave regularization for high-dimensional sparse estimation problems, Performance bounds for parameter estimates of high-dimensional linear models with correlated errors, Discussion of: ``Grouping strategies and thresholding for high dimension linear models'', Understanding large text corpora via sparse machine learning, Two tales of variable selection for high dimensional regression: Screening and model building, Discussion: One-step sparse estimates in nonconcave penalized likelihood models, Information-Based Optimal Subdata Selection for Big Data Linear Regression, Lasso-type and
Heuristic Strategies in Model Selection and Forecasting, Nonbifurcating Phylogenetic Tree Inference via the Adaptive LASSO, The Convex Mixture Distribution: Granger Causality for Categorical Time Series, Stability Selection, Multiscale Change Point Inference, Variance estimation based on blocked 3×2 cross-validation in high-dimensional linear regression, Learning sparse classifiers with difference of convex functions algorithms, Penalized and ridge-type shrinkage estimators in Poisson regression model, High dimensional single index models, Variable selection in high-dimensional partly linear additive models



Cites Work