Simultaneous analysis of Lasso and Dantzig selector
From MaRDI portal
DOI: 10.1214/08-AOS620
zbMath: 1173.62022
arXiv: 0801.1095
OpenAlex: W2116581043
Wikidata: Q100786247 (Scholia: Q100786247)
MaRDI QID: Q2388978
Peter J. Bickel, Ya'acov Ritov, Alexandre B. Tsybakov
Publication date: 22 July 2009
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/0801.1095
Mathematics Subject Classification:
Nonparametric regression and quantile regression (62G08)
Asymptotic properties of nonparametric inference (62G20)
Linear regression; mixed models (62J05)
Prediction theory (aspects of stochastic processes) (60G25)
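Not part of the portal record, but for orientation: the paper compares the two estimators below on the sparse linear model \(y = X\beta + \epsilon\). This is a minimal illustrative sketch, assuming a standard LP formulation of the Dantzig selector and a textbook coordinate-descent Lasso; the problem sizes and the value of `lam` are arbitrary demo choices, not constants from the paper.

```python
# Lasso vs. Dantzig selector on a toy sparse regression problem.
# Dantzig selector:  min ||b||_1  s.t.  ||X'(y - Xb)||_inf <= lam,
# solved as a linear program with the split b = u - v, u, v >= 0.
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Solve min ||b||_1 s.t. ||X'(y - Xb)||_inf <= lam via linprog."""
    n, p = X.shape
    G, b = X.T @ X, X.T @ y
    c = np.ones(2 * p)                         # objective: sum(u) + sum(v)
    A_ub = np.block([[G, -G], [-G, G]])        # G(u-v) <= b+lam and -G(u-v) <= lam-b
    b_ub = np.concatenate([b + lam, lam - b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return res.x[:p] - res.x[p:]

def lasso_cd(X, y, lam, n_sweeps=200):
    """Plain coordinate descent for min 0.5||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]   # partial residual
            z = X[:, j] @ r_j
            beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]
    return beta

# Toy data: 2 active coefficients out of 10 (demo values, not from the paper).
rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[0], beta_true[1] = 3.0, -2.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_ds = dantzig_selector(X, y, lam=3.0)
beta_lasso = lasso_cd(X, y, lam=3.0)
```

With the same tuning parameter, the two estimates land close to each other and to the true support, which is the paper's "simultaneous analysis" message: under restricted-eigenvalue-type conditions both estimators obey oracle inequalities of the same order.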
Related Items
Worst possible sub-directions in high-dimensional models, Nonnegative adaptive Lasso for ultra-high dimensional regression models and a two-stage method applied in financial modeling, Regularity properties for sparse regression, Best subset selection via a modern optimization lens, An analysis of penalized interaction models, Censored linear model in high dimensions. Penalised linear regression on high-dimensional data with left-censored response variable, Minimum distance Lasso for robust high-dimensional regression, SLOPE is adaptive to unknown sparsity and asymptotically minimax, Asymptotic properties of lasso in high-dimensional partially linear models, Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model, The benefit of group sparsity in group inference with de-biased scaled group Lasso, Geometric inference for general high-dimensional linear inverse problems, Solution of linear ill-posed problems using overcomplete dictionaries, Oracle inequalities, variable selection and uniform inference in high-dimensional correlated random effects panel data models, Econometric estimation with high-dimensional moment equalities, Conjugate gradient acceleration of iteratively re-weighted least squares methods, Group-wise semiparametric modeling: a SCSE approach, Estimation of matrices with row sparsity, Asymptotics of Dantzig selector for a general single-index model, Sharp MSE bounds for proximal denoising, The \(l_q\) consistency of the Dantzig selector for Cox's proportional hazards model, On the stability of sparse convolutions, Strong consistency of Lasso estimators, Recovery of high-dimensional sparse signals via \(\ell_1\)-minimization, Oracle inequalities for the lasso in the Cox model, Rates of convergence of the adaptive LASSO estimators to the oracle distribution and higher order refinements by the bootstrap, Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness, Statistical 
significance in high-dimensional linear models, The geometry of least squares in the 21st century, The Dantzig selector and sparsity oracle inequalities, Sparse recovery under matrix uncertainty, Nearly optimal minimax estimator for high-dimensional sparse linear regression, Impacts of high dimensionality in finite samples, A partial overview of the theory of statistics with functional data, Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression, The \(L_1\) penalized LAD estimator for high dimensional linear regression, Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization, Adjusted regularized estimation in the accelerated failure time model with high dimensional covariates, Correlated variables in regression: clustering and sparse estimation, Grouping strategies and thresholding for high dimensional linear models, High-dimensional covariance matrix estimation with missing observations, Regularized 3D functional regression for brain image data via Haar wavelets, \(\ell_{1}\)-penalization for mixture regression models, Rejoinder to the comments on: \(\ell _{1}\)-penalization for mixture regression models, Sparsity in multiple kernel learning, Phase transition in limiting distributions of coherence of high-dimensional random matrices, Shrinkage estimation for identification of linear components in additive models, Adaptive Dantzig density estimation, Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization, On verifiable sufficient conditions for sparse signal recovery via \(\ell_{1}\) minimization, Non-convex penalized estimation in high-dimensional models with single-index structure, Limiting laws of coherence of random matrices with applications to testing covariance structure and construction of compressed sensing matrices, A majorization-minimization approach to variable selection using spike and slab priors, Sparse regression 
learning by aggregation and Langevin Monte-Carlo, Mirror averaging with sparsity priors, Transductive versions of the Lasso and the Dantzig selector, General nonexact oracle inequalities for classes with a subexponential envelope, Regularization for Cox's proportional hazards model with NP-dimensionality, High-dimensional Cox regression analysis in genetic studies with censored survival outcomes, Variable selection in infinite-dimensional problems, Estimation of high-dimensional partially-observed discrete Markov random fields, A sharp nonasymptotic bound and phase diagram of \(L_{1/2}\) regularization, Optimal computational and statistical rates of convergence for sparse nonconvex learning problems, A new perspective on least squares under convex constraint, The horseshoe estimator: posterior concentration around nearly black vectors, \(L_1\)-penalization in functional linear regression with subgaussian design, On higher order isotropy conditions and lower bounds for sparse quadratic forms, Comment on ``Hypothesis testing by convex optimization, Normalized and standard Dantzig estimators: two approaches, On the residual empirical process based on the ALASSO in high dimensions and its functional oracle property, Oracle inequalities for high dimensional vector autoregressions, Weighted \(\ell_1\)-penalized corrected quantile regression for high dimensional measurement error models, Robust inference on average treatment effects with possibly more covariates than observations, Cox process functional learning, Estimation of average treatment effects with panel data: asymptotic theory and implementation, Additive model selection, Sparse recovery under weak moment assumptions, A Cluster Elastic Net for Multivariate Regression, Group Inference in High Dimensions with Applications to Hierarchical Testing, Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error, Exponential screening and optimal rates of sparse estimation, Estimation of 
high-dimensional low-rank matrices, Estimation of (near) low-rank matrices with noise and high-dimensional scaling, Performance guarantees for individualized treatment rules, Model-assisted inference for treatment effects using regularized calibrated estimation with high-dimensional data, On asymptotically optimal confidence regions and tests for high-dimensional models, Lasso Inference for High-Dimensional Time Series, Regularized estimation in sparse high-dimensional time series models, Distribution-Free Predictive Inference For Regression, Selection by partitioning the solution paths, Double Machine Learning for Partially Linear Mixed-Effects Models with Repeated Measurements, Regularizing Double Machine Learning in Partially Linear Endogenous Models, A unified approach to model selection and sparse recovery using regularized least squares, Geometric median and robust estimation in Banach spaces, Factor-Adjusted Regularized Model Selection, Sparse Sliced Inverse Regression Via Lasso, Asymptotic normality and optimalities in estimation of large Gaussian graphical models, Gaussian graphical model estimation with false discovery rate control, Signal extraction approach for sparse multivariate response regression, Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates, A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery, On cross-validated Lasso in high dimensions, Statistical inference in sparse high-dimensional additive models, Lasso-driven inference in time and space, Nonnegative-Lasso and application in index tracking, Significance testing in non-sparse high-dimensional linear models, On the prediction loss of the Lasso in the partially labeled setting, Fitting sparse linear models under the sufficient and necessary condition for model identification, A linear programming model for selection of sparse high-dimensional multiperiod portfolios, High 
dimensional regression for regenerative time-series: an application to road traffic modeling, Model selection consistency of Lasso for empirical data, A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model, Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls, Signal recovery under cumulative coherence, Some sharp performance bounds for least squares regression with \(L_1\) regularization, Near-ideal model selection by \(\ell _{1}\) minimization, High-dimensional \(A\)-learning for optimal dynamic treatment regimes, Are discoveries spurious? Distributions of maximum spurious correlations and their applications, Variational multiscale nonparametric regression: smooth functions, Sparse linear models and \(l_1\)-regularized 2SLS with high-dimensional endogenous regressors and instruments, High dimensional Gaussian copula graphical model with FDR control, Trace regression model with simultaneously low rank and row(column) sparse parameter, Structured variable selection via prior-induced hierarchical penalty functions, Variable selection for high dimensional Gaussian copula regression model: an adaptive hypothesis testing procedure, Balanced estimation for high-dimensional measurement error models, Relaxed sparse eigenvalue conditions for sparse estimation via non-convex regularized regression, A doubly sparse approach for group variable selection, LASSO estimation of threshold autoregressive models, Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions, Bayesian linear regression with sparse priors, Controlling the false discovery rate via knockoffs, Efficient nonconvex sparse group feature selection via continuous and discrete optimization, Model selection and structure specification in ultra-high dimensional generalised semi-varying coefficient models, Lasso-type estimators for semiparametric nonlinear mixed-effects 
models estimation, Estimator selection: a new method with applications to kernel density estimation, Stability of the elastic net estimator, \(\ell_1\)-regularization of high-dimensional time-series models with non-Gaussian and heteroskedastic errors, Sampling in the analysis transform domain, Nonconvex penalized reduced rank regression and its oracle properties in high dimensions, Finite mixture regression: a sparse variable selection by model selection for clustering, A Rice method proof of the null-space property over the Grassmannian, Estimating a sparse reduction for general regression in high dimensions, Inferring large graphs using \(\ell_1\)-penalized likelihood, A group adaptive elastic-net approach for variable selection in high-dimensional linear regression, Asymptotically honest confidence regions for high dimensional parameters by the desparsified conservative Lasso, Linear regression with sparsely permuted data, Estimator selection with respect to Hellinger-type risks, Generalization of constraints for high dimensional regression problems, An analysis of the SPARSEVA estimate for the finite sample data case, Oracle inequalities and optimal inference under group sparsity, A two-stage regularization method for variable selection and forecasting in high-order interaction model, Support vector machines with a reject option, A systematic review on model selection in high-dimensional regression, Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion, Factor models and variable selection in high-dimensional regression analysis, Nonconvex penalized ridge estimations for partially linear additive models in ultrahigh dimension, Signal recovery under mutual incoherence property and oracle inequalities, On model selection from a finite family of possibly misspecified time series models, Generalized M-estimators for high-dimensional Tobit I models, Isotonic regression meets Lasso, Determination of vector error correction models in high 
dimensions, Oracle inequalities for high-dimensional prediction, Variable screening for high dimensional time series, Improved bounds for square-root Lasso and square-root slope, A strong converse bound for multiple hypothesis testing, with applications to high-dimensional estimation, Pathwise coordinate optimization for sparse learning: algorithm and theory, High dimensional censored quantile regression, Bayesian estimation of sparse signals with a continuous spike-and-slab prior, Column normalization of a random measurement matrix, Regularization and the small-ball method. I: Sparse recovery, Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space, I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error, Sparse inference of the drift of a high-dimensional Ornstein-Uhlenbeck process, Optimal estimation of slope vector in high-dimensional linear transformation models, High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity, SOCP based variance free Dantzig selector with application to robust estimation, High-dimensional Gaussian model selection on a Gaussian design, The benefit of group sparsity, SPADES and mixture models, Solution of linear ill-posed problems using random dictionaries, Time-varying Lasso, Sparse factor regression via penalized maximum likelihood estimation, On the sensitivity of the Lasso to the number of predictor variables, Sliding-MOMP based channel estimation scheme for ISDB-T systems, Lasso-type recovery of sparse representations for high-dimensional data, Some theoretical results on the grouped variables Lasso, Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity, Joint variable and rank selection for parsimonious estimation of high-dimensional matrices, Fast global convergence of gradient methods for high-dimensional statistical recovery, Regularized rank-based estimation of high-dimensional 
nonparanormal graphical models, Accuracy guaranties for \(\ell_{1}\) recovery of block-sparse signals, Model selection by LASSO methods in a change-point model, Penalized estimation in additive varying coefficient models using grouped regularization, Prediction-based regularization using data augmented regression, Spline estimator for simultaneous variable selection and constant coefficient identification in high-dimensional generalized varying-coefficient models, A simple forward selection procedure based on false discovery rate control, Sparse recovery in convex hulls via entropy penalization, The \(\ell_{2,q}\) regularized group sparse optimization: lower bound theory, recovery bound and algorithms, High-dimensional additive modeling, High-dimensional model recovery from random sketched data by exploring intrinsic sparsity, Fundamental barriers to high-dimensional regression with convex penalties, Canonical thresholding for nonsparse high-dimensional linear regression, Penalized and constrained LAD estimation in fixed and high dimension, Unconstrained \(\ell_1\)-\(\ell_2\) minimization for sparse recovery via mutual coherence, Sparse Laplacian shrinkage with the graphical Lasso estimator for regression problems, Testability of high-dimensional linear models with nonsparse structures, Adaptive estimation in multivariate response regression with hidden variables, Sparse high-dimensional linear regression. 
Estimating squared error and a phase transition, Bayesian high-dimensional semi-parametric inference beyond sub-Gaussian errors, The variable selection by the Dantzig selector for Cox's proportional hazards model, On robust learning in the canonical change point problem under heavy tailed errors in finite and growing dimensions, Concentration inequalities for non-causal random fields, High-dimensional sufficient dimension reduction through principal projections, Penalized estimation of threshold auto-regressive models with many components and thresholds, De-biasing the Lasso with degrees-of-freedom adjustment, Sliding window strategy for convolutional spike sorting with Lasso. Algorithm, theoretical guarantees and complexity, Adaptive Huber regression on Markov-dependent data, Extreme eigenvalues of nonlinear correlation matrices with applications to additive models, Doubly debiased Lasso: high-dimensional inference under hidden confounding, Ridge regression revisited: debiasing, thresholding and bootstrap, Penalized least square in sparse setting with convex penalty and non Gaussian errors, Weighted Lasso estimates for sparse logistic regression: non-asymptotic properties with measurement errors, On LASSO for predictive regression, Bayesian factor-adjusted sparse regression, A unifying framework of high-dimensional sparse estimation with difference-of-convex (DC) regularizations, Large-scale multivariate sparse regression with applications to UK Biobank, How can we identify the sparsity structure pattern of high-dimensional data: an elementary statistical analysis to interpretable machine learning, On Kendall's regression, The convex geometry of linear inverse problems, Hierarchical inference for genome-wide association studies: a view on methodology with software, \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities, Robust estimation for an inverse problem arising in multiview geometry, Variable selection and parameter estimation for 
partially linear models via Dantzig selector, Needles and straw in a haystack: posterior concentration for possibly sparse sequences, Regularizers for structured sparsity, Minimax risks for sparse regressions: ultra-high dimensional phenomenons, Statistical multiresolution Dantzig estimation in imaging: fundamental concepts and algorithmic framework, Theoretical properties of the overlapping groups Lasso, High-dimensional additive hazards models and the lasso, Detection of sparse additive functions, PAC-Bayesian estimation and prediction in sparse additive models, The Lasso problem and uniqueness, On the asymptotic properties of the group lasso estimator for linear models, Lasso, iterative feature selection and the correlation selector: oracle inequalities and numerical performances, Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization, Selection of variables and dimension reduction in high-dimensional non-parametric regression, On the conditions used to prove oracle results for the Lasso, Self-concordant analysis for logistic regression, MAP model selection in Gaussian regression, Detection boundary in sparse regression, PAC-Bayesian bounds for sparse regression estimation with exponential weights, The Lasso as an \(\ell _{1}\)-ball model selection procedure, The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso), Sparsity considerations for dependent variables, The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods, Least squares after model selection in high-dimensional sparse models, Sign-constrained least squares estimation for high-dimensional regression, Approximate \(\ell_0\)-penalized estimation of piecewise-constant signals on graphs, Slope meets Lasso: improved oracle bounds and optimality, Debiasing the Lasso: optimal sample size for Gaussian designs, The landscape of empirical risk for nonconvex losses, Overcoming 
the limitations of phase transition by higher order analysis of regularization techniques, Rate optimal estimation and confidence intervals for high-dimensional regression with missing covariates, Inference under Fine-Gray competing risks model with high-dimensional covariates, Parametric and semiparametric reduced-rank regression with flexible sparsity, Greedy variance estimation for the LASSO, Elastic net penalized quantile regression model, Consistency bounds and support recovery of d-stationary solutions of sparse sample average approximations, Bayesian MIDAS penalized regressions: estimation, selection, and prediction, Simultaneous feature selection and clustering based on square root optimization, An efficient algorithm for joint feature screening in ultrahigh-dimensional Cox's model, Robust high-dimensional factor models with applications to statistical machine learning, Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions, On the exponentially weighted aggregate with the Laplace prior, Multicarving for high-dimensional post-selection inference, The de-biased group Lasso estimation for varying coefficient models, Evaluating visual properties via robust HodgeRank, Second-order Stein: SURE for SURE and other applications in high-dimensional inference, The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning, Control variate selection for Monte Carlo integration, Sampling from non-smooth distributions through Langevin diffusion, Compound Poisson point processes, concentration and oracle inequalities, Oracle inequalities for weighted group Lasso in high-dimensional misspecified Cox models, Optimal linear discriminators for the discrete choice model in growing dimensions, Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning, In defense of the indefensible: a very naïve approach to high-dimensional inference, Sparse high-dimensional 
semi-nonparametric quantile regression in a reproducing kernel Hilbert space, Robust sparse recovery via a novel convex model, Quantile regression feature selection and estimation with grouped variables using Huber approximation, Network differential connectivity analysis, Group penalized quantile regression, On the robustness of minimum norm interpolators and regularized empirical risk minimizers, GenMod: a generative modeling approach for spectral representation of PDEs with random inputs, Multivariate sparse Laplacian shrinkage for joint estimation of two graphical structures, Conditional rotation between forecasting models, High dimensional generalized linear models for temporal dependent data, On consistency and sparsity for high-dimensional functional time series with application to autoregressions, Design of c-optimal experiments for high-dimensional linear models, Likelihood estimation of sparse topic distributions in topic models and its applications to Wasserstein document distance calculations, High-dimensional linear models with many endogenous variables, Conditional sure independence screening by conditional marginal empirical likelihood, High-dimensional tests for functional networks of brain anatomic regions, Variable selection and structure identification for varying coefficient Cox models, Regularized estimation in sparse high-dimensional multivariate regression, with application to a DNA methylation study, Lasso for sparse linear regression with exponentially \(\beta\)-mixing errors, The degrees of freedom of partly smooth regularizers, Accuracy assessment for high-dimensional linear regression, L1-norm-based principal component analysis with adaptive regularization, Structure learning of sparse directed acyclic graphs incorporating the scale-free property, Predictor ranking and false discovery proportion control in high-dimensional regression, Inference for high-dimensional instrumental variables regression, Learning rates for partially linear 
functional models with high dimensional scalar covariates, A simple homotopy proximal mapping algorithm for compressive sensing, Scalable interpretable learning for multi-response error-in-variables regression, Prediction error after model search, \(\alpha\)-variational inference with statistical guarantees, Robust machine learning by median-of-means: theory and practice, Lasso guarantees for \(\beta \)-mixing heavy-tailed time series, Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators, Consistency of \(\ell_1\) penalized negative binomial regressions, Support union recovery in high-dimensional multivariate regression, \(\ell_1\)-penalized quantile regression in high-dimensional sparse models, Variable selection for sparse logistic regression, Multi-stage convex relaxation for feature selection, Detection of a sparse submatrix of a high-dimensional noisy matrix, RIPless compressed sensing from anisotropic measurements, On a unified view of nullspace-type conditions for recoveries associated with general sparsity structures, Calibrating nonconvex penalized regression in ultra-high dimension, Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors, Confidence sets in sparse regression, Model selection for high-dimensional linear regression with dependent observations, Convergence rates of variational posterior distributions, A general framework for Bayes structured linear models, Asymptotic risk and phase transition of \(l_1\)-penalized robust estimator, Aggregation of affine estimators, Estimation and variable selection with exponential weights, Statistical inference in compound functional models, Estimation in the presence of many nuisance parameters: composite likelihood and plug-in likelihood, On Dantzig and Lasso estimators of the drift in a high dimensional Ornstein-Uhlenbeck model, Finite-sample analysis of \(M\)-estimators using self-concordance, Adaptive robust variable 
selection, Sparse identification of truncation errors, On the uniform convergence of empirical norms and inner products, with application to causal inference, A look at robustness and stability of \(\ell_1\)-versus \(\ell_0\)-regularization: discussion of papers by Bertsimas et al. and Hastie et al., Finite sample performance of linear least squares estimation, Robust low-rank multiple kernel learning with compound regularization, Parallel integrative learning for large-scale multi-response regression with incomplete outcomes, Pivotal estimation via square-root lasso in nonparametric regression, Lasso with long memory regression errors, Sparse recovery with coherent tight frames via analysis Dantzig selector and analysis LASSO, Robust dequantized compressive sensing, Sparse and efficient estimation for partial spline models with increasing dimension, Sparse semiparametric discriminant analysis, High-dimensional variable screening and bias in subsequent inference, with an empirical comparison, Sparse distance metric learning, Sparse trace norm regularization, A global homogeneity test for high-dimensional linear regression, Prediction error bounds for linear regression with the TREX, Boosting with structural sparsity: a differential inclusion approach, Prediction and estimation consistency of sparse multi-class penalized optimal scoring, Sorted concave penalized regression, Strong oracle optimality of folded concave penalized estimation, Endogeneity in high dimensions, Instrumental variables estimation with many weak instruments using regularized JIVE, Leave-one-out cross-validation is risk consistent for Lasso, Robust finite mixture regression for heterogeneous targets, On the differences between \(L_2\) boosting and the Lasso, QUADRO: a supervised dimension reduction method via Rayleigh quotient optimization, Structured analysis of the high-dimensional FMR model, Variable selection with spatially autoregressive errors: a generalized moments Lasso estimator, 
Regularization methods for high-dimensional sparse control function models, High-dimensional regression in practice: an empirical study of finite-sample prediction, variable selection and ranking, Selective inference via marginal screening for high dimensional classification, Sharp oracle inequalities for low-complexity priors, Computational and statistical analyses for robust non-convex sparse regularized regression problem, Deviation inequalities for separately Lipschitz functionals of composition of random functions, Sparse Poisson regression with penalized weighted score function, Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation, On Lasso refitting strategies, Adaptively weighted group Lasso for semiparametric quantile regression models, On the asymptotic variance of the debiased Lasso, Posterior asymptotic normality for an individual coordinate in high-dimensional linear regression, High-dimensional generalized linear models incorporating graphical structure among predictors, Doubly penalized estimation in additive regression with high-dimensional data, Projected spline estimation of the nonparametric function in high-dimensional partially linear models for massive data, Joint feature screening for ultra-high-dimensional sparse additive hazards model by the sparsity-restricted pseudo-score estimator, Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming, Non-separable models with high-dimensional data, The Dantzig selector for a linear model of diffusion processes, Weaker regularity conditions and sparse recovery in high-dimensional regression, Structured estimation for the nonparametric Cox model, Stable recovery of low rank matrices from nuclear norm minimization, Minimax-optimal nonparametric regression in high dimensions, Near oracle performance and block analysis of signal space greedy methods, Lasso and probabilistic inequalities for multivariate point 
processes, Preconditioning the Lasso for sign consistency, Sparse learning via Boolean relaxations, High dimensional single index models, Innovated interaction screening for high-dimensional nonlinear classification, Sparse high-dimensional varying coefficient model: nonasymptotic minimax study, High-Dimensional Vector Autoregressive Time Series Modeling via Tensor Decomposition, Learning Latent Factors From Diversified Projections and Its Applications to Over-Estimated and Weak Factors, Compressed data separation via unconstrained l1-split analysis, A proximal dual semismooth Newton method for zero-norm penalized quantile regression estimator, Hypothesis Testing in High-Dimensional Instrumental Variables Regression With an Application to Genomics Data, Overlapping group lasso for high-dimensional generalized linear models, Adaptive elastic net-penalized quantile regression for variable selection, Rates of convergence of the adaptive elastic net and the post-selection procedure in ultra-high dimensional sparse models, Sparse linear regression models of high dimensional covariates with non-Gaussian outliers and Berkson error-in-variable under heteroscedasticity, On estimation error bounds of the Elastic Net when p ≫ n, Long‐term prediction intervals with many covariates, Sample average approximation with heavier tails. 
I: Non-asymptotic bounds with weak assumptions and stochastic constraints, Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso, Difference-of-Convex Learning: Directional Stationarity, Optimality, and Sparsity, High-Dimensional Factor Regression for Heterogeneous Subpopulations, Statistical Inference for High-Dimensional Models via Recursive Online-Score Estimation, High-dimensional latent panel quantile regression with an application to asset pricing, Multiple change points detection in high-dimensional multivariate regression, Rejoinder to “Reader reaction to ‘Outcome‐adaptive Lasso: Variable selection for causal inference’ by Shortreed and Ertefaie (2017)”, A structured brain‐wide and genome‐wide association study using ADNI PET images, Variance estimation in high-dimensional linear regression via adaptive elastic-net, Group sparse recovery via group square-root elastic net and the iterative multivariate thresholding-based algorithm, Unconditional quantile regression with high‐dimensional data, Hybrid Hard-Soft Screening for High-dimensional Latent Class Analysis, A Unified Framework for Change Point Detection in High-Dimensional Linear Models, A Scale-Free Approach for False Discovery Rate Control in Generalized Linear Models, Inference for High-Dimensional Exchangeable Arrays, Nonparametric Functional Graphical Modeling Through Functional Additive Regression Operator, Orthogonalized Kernel Debiased Machine Learning for Multimodal Data Analysis, High-Dimensional Gaussian Graphical Regression Models with Covariates, Frequentist Model Averaging for Undirected Gaussian Graphical Models, L 0 -regularization for high-dimensional regression with corrupted data, Integrative sparse reduced-rank regression via orthogonal rotation for analysis of high-dimensional multi-source data, Sparse estimation in high-dimensional linear errors-in-variables regression via a covariate relaxation 
method, Analysis of sparse recovery for Legendre expansions using envelope bound, Penetrating sporadic return predictability, Time-varying forecast combination for high-dimensional data, Semiparametric estimation of long-term treatment effects, Sparse and robust estimation with ridge minimax concave penalty, Adaptive Lasso and Dantzig selector for spatial point processes intensity estimation, Model selection in high-dimensional noisy data: a simulation study, Honest Confidence Sets for High-Dimensional Regression by Projection and Shrinkage, Scalable and efficient inference via CPE, Testing stochastic dominance with many conditioning variables, Joint Structural Break Detection and Parameter Estimation in High-Dimensional Nonstationary VAR Models, Grouped penalization estimation of the osteoporosis data in the traditional Chinese medicine, UNIFORM INFERENCE IN HIGH-DIMENSIONAL DYNAMIC PANEL DATA MODELS WITH APPROXIMATELY SPARSE FIXED EFFECTS, Online Decision Making with High-Dimensional Covariates, Multi-Armed Angle-Based Direct Learning for Estimating Optimal Individualized Treatment Rules With Various Outcomes, On the finite-sample analysis of \(\Theta\)-estimators, Independently Interpretable Lasso for Generalized Linear Models, Robust recovery of signals with partially known support information using weighted BPDN, Communication-efficient estimation of high-dimensional quantile regression, Generalized Regression Estimators with High-Dimensional Covariates, A Bayesian approach for the segmentation of series with a functional effect, A Tuning-free Robust and Efficient Approach to High-dimensional Regression, Fixed Effects Testing in High-Dimensional Linear Mixed Models, Cross-Validation With Confidence, Oracle inequalities for the Lasso in the additive hazards model with interval-censored data, Consistent parameter estimation for Lasso and approximate message passing, A
significance test for graph‐constrained estimation, A polynomial algorithm for best-subset selection problem, The Lasso for High Dimensional Regression with a Possible Change Point, Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models, Ridge regression and asymptotic minimax estimation over spheres of growing dimension, Functional linear regression with points of impact, Inference for biased models: a quasi-instrumental variable approach, Structured sparsity through convex optimization, Quasi-likelihood and/or robust estimation in high dimensions, A selective review of group selection in high-dimensional models, High-dimensional regression with unknown variance, A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers, Sparse estimation by exponential weighting, A general theory of concave regularization for high-dimensional sparse estimation problems, Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation, Randomized maximum-contrast selection: subagging for large-scale regression, Performance bounds for parameter estimates of high-dimensional linear models with correlated errors, High-dimensional linear model selection motivated by multiple testing, Comments on: \(\ell_1\)-penalization for mixture regression models, l1-Penalised Ordinal Polytomous Regression Estimators with Application to Gene Expression Studies, Sparse recovery from extreme eigenvalues deviation inequalities, Robust Wasserstein profile inference and applications to machine learning, Simulation-based Value-at-Risk for nonlinear portfolios, Sparse Minimum Discrepancy Approach to Sufficient Dimension Reduction with Simultaneous Variable Selection in Ultrahigh Dimension, Comments on: “High-dimensional simultaneous inference with the bootstrap”, The Partial Linear Model in High Dimensions, Oracle Inequalities
for Convex Loss Functions with Nonlinear Targets, The Penalized Analytic Center Estimator, Estimation of Sparse Structural Parameters with Many Endogenous Variables, High-dimensional statistical inference via DATE, Investigating competition in financial markets: a sparse autologistic model for dynamic network data, Partitioned Approach for High-dimensional Confidence Intervals with Large Split Sizes, REMI: REGRESSION WITH MARGINAL INFORMATION AND ITS APPLICATION IN GENOME-WIDE ASSOCIATION STUDIES, Elastic-net Regularized High-dimensional Negative Binomial Regression: Consistency and Weak Signal Detection, Finite-sample results for lasso and stepwise Neyman-orthogonal Poisson estimators, Poisson Regression With Error Corrupted High Dimensional Features, Low-Rank and Sparse Multi-task Learning, L0-Regularized Learning for High-Dimensional Additive Hazards Regression, High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks, Classification of longitudinal data through a semiparametric mixed‐effects model based on lasso‐type estimators, Multivariate sparse group lasso for the multivariate multiple linear regression with an arbitrary group structure, Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization, Identification of Partially Linear Structure in Additive Models with an Application to Gene Expression Prediction from Sequences, Recovering Structured Signals in Noise: Least-Squares Meets Compressed Sensing, Sparse Model Uncertainties in Compressed Sensing with Application to Convolutions and Sporadic Communication, Sparse reduced-rank regression for multivariate varying-coefficient models, Modeling gene-covariate interactions in sparse regression with group structure for genome-wide association studies, Group variable selection via convex log‐exp‐sum penalty with application to a breast cancer survivor study, Tilting Methods for Assessing the Influence of Components in a 
Classifier, False Discovery Rate Smoothing, Oracle Estimation of a Change Point in High-Dimensional Quantile Regression, On the optimality of sliced inverse regression in high dimensions, Convergence of covariance and spectral density estimates for high-dimensional locally stationary processes, Estimation and variable selection in partial linear single index models with error-prone linear covariates, The Sparse MLE for Ultrahigh-Dimensional Feature Screening, Monte Carlo Simulation for Lasso-Type Problems by Estimator Augmentation, Penalised robust estimators for sparse and high-dimensional linear models, Multicompartment magnetic resonance fingerprinting, Zeros of optimal polynomial approximants in \(\ell_A^p\), SONIC: social network analysis with influencers and communities, High-dimensional robust regression with \(L_q\)-loss functions, Non-asymptotic oracle inequalities for the Lasso and Group Lasso in high dimensional logistic model, Lasso regression in sparse linear model with \(\varphi\)-mixing errors, A resampling approach for confidence intervals in linear time-series models after model selection, Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression, High-dimensional VARs with common factors, High Dimensional Change Point Estimation via Sparse Projection, Goodness-of-Fit Tests for High Dimensional Linear Models, Recovery of partly sparse and dense signals, Stable low-rank matrix recovery via null space properties, Efficient regularization with wavelet sparsity constraints in photoacoustic tomography, On the sparsity of Lasso minimizers in sparse data recovery, Hard thresholding regression, Oracle Inequalities for Local and Global Empirical Risk Minimizers, DASSO: Connections Between the Dantzig Selector and Lasso, A statistical mechanics approach to de-biasing and uncertainty estimation in LASSO for random measurements, Adaptive Huber Regression, RANK:
Large-Scale Inference With Graphical Nonlinear Knockoffs, Greedy forward regression for variable screening, Statistical challenges of high-dimensional data, A Tight Bound of Hard Thresholding, Likelihood-Based Selection and Sharp Parameter Estimation, Structured, Sparse Aggregation, High-Dimensional Cox Models: The Choice of Penalty as Part of the Model Building Process, Sharp variable selection of a sparse submatrix in a high-dimensional noisy matrix, An ℓ1-oracle inequality for the Lasso in multivariate finite mixture of multivariate Gaussian regression models, Randomized pick-freeze for sparse Sobol indices estimation in high dimension, Group Regularized Estimation Under Structural Hierarchy, Convergence and sparsity of Lasso and group Lasso in high-dimensional generalized linear models, A study on tuning parameter selection for the high-dimensional lasso, Concentration Inequalities for Statistical Inference, Group LASSO for Structural Break Time Series, GENERALIZED ADDITIVE PARTIAL LINEAR MODELS WITH HIGH-DIMENSIONAL COVARIATES, Structured random measurements in signal processing, Weak Convergence of the Regularization Path in Penalized M-Estimation, The Dantzig Selector in Cox's Proportional Hazards Model, Sharp Oracle Inequalities for Square Root Regularization, On Computationally Tractable Selection of Experiments in Measurement-Constrained Regression Models, Regularization and the small-ball method II: complexity dependent error rates, One-Bit Compressed Sensing by Linear Programming, Nonsparse Learning with Latent Variables, Nonbifurcating Phylogenetic Tree Inference via the Adaptive LASSO, The Convex Mixture Distribution: Granger Causality for Categorical Time Series, On Cross-Validation for Sparse Reduced Rank Regression, Linear Hypothesis Testing in Dense High-Dimensional Linear Models, A NEW APPROACH TO SELECT THE BEST SUBSET OF PREDICTORS IN LINEAR
REGRESSION MODELLING: BI-OBJECTIVE MIXED INTEGER LINEAR PROGRAMMING, Parallelism, uniqueness, and large-sample asymptotics for the Dantzig selector, Sure Independence Screening for Ultrahigh Dimensional Feature Space, Stability Selection, The Adaptive Gril Estimator with a Diverging Number of Parameters, Asymptotic Equivalence of Regularization Methods in Thresholded Parameter Space, Using Machine Learning Methods to Support Causal Inference in Econometrics, Robust Width: A Characterization of Uniformly Stable and Robust Compressed Sensing, Efficient Sparse Hessian-Based Semismooth Newton Algorithms for Dantzig Selector, Variable selection in high-dimensional partly linear additive models, Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory, Graph-Based Regularization for Regression Problems with Alignment and Highly Correlated Designs, Oracle Efficient Estimation of Structural Breaks in Cointegrating Regressions, Shrinkage estimation of multiple threshold factor models, Testing Mediation Effects Using Logic of Boolean Matrices, Debiasing convex regularized estimators and interval estimation in linear models, Adaptive denoising of signals with local shift-invariant structure, Weighted l1‐Penalized Corrected Quantile Regression for High‐Dimensional Temporally Dependent Measurement Errors, Targeting underrepresented populations in precision medicine: a federated transfer learning approach, Lasso in Infinite dimension: application to variable selection in functional multivariate linear regression, LASSO Reloaded: A Variational Analysis Perspective with Applications to Compressed Sensing, Post-selection Inference of High-dimensional Logistic Regression Under Case–Control Design, High-Dimensional Censored Regression via the Penalized Tobit Likelihood, Sparse generalized Yule-Walker estimation for large spatio-temporal autoregressions with an application to NO\(_2\) satellite data, The nonparametric Box-Cox
model for high-dimensional regression analysis, Optimal learning, Nearly Dimension-Independent Sparse Linear Bandit over Small Action Spaces via Best Subset Selection, Subspace Estimation with Automatic Dimension and Variable Selection in Sufficient Dimension Reduction, Forward-selected panel data approach for program evaluation, Cross-Fitted Residual Regression for High-Dimensional Heteroscedasticity Pursuit, High-dimensional inference robust to outliers with ℓ1-norm penalization, Variable selection and regularization via arbitrary rectangle-range generalized elastic net, A unified precision matrix estimation framework via sparse column-wise inverse operator under weak sparsity, Relative Lipschitz-like Property of Parametric Systems via Projectional Coderivatives, On Lasso and Slope drift estimators for Lévy-driven Ornstein-Uhlenbeck processes, Concentration of measure bounds for matrix-variate data with missing values, Non-asymptotic bounds for the \(\ell_{\infty}\) estimator in linear regression with uniform noise, The Lasso with general Gaussian designs with applications to hypothesis testing, High-dimensional composite quantile regression: optimal statistical guarantees and fast algorithms, Rank-Based Greedy Model Averaging for High-Dimensional Survival Data, -Penalized Pairwise Difference Estimation for a High-Dimensional Censored Regression Model, Culling the Herd of Moments with Penalized Empirical Likelihood, Homogeneity and Sparsity Analysis for High-Dimensional Panel Data Models, Expectile trace regression via low-rank and group sparsity regularization, An alternative to synthetic control for models with many covariates under sparsity, Estimation and inference in sparse multivariate regression and conditional Gaussian graphical models under an unbalanced distributed setting, Nonparametric Causal Effects Based on Longitudinal Modified Treatment Policies
Cites Work
- The Dantzig selector and sparsity oracle inequalities
- Sparsity in penalized empirical risk minimization
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso-type recovery of sparse representations for high-dimensional data
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Functional aggregation for nonparametric regression
- Asymptotics for Lasso-type estimators
- Least angle regression (with discussion)
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- Aggregation for Gaussian regression
- Pathwise coordinate optimization
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\) (with discussions and rejoinder)
- High-dimensional graphs and variable selection with the Lasso
- Stable recovery of sparse overcomplete representations in the presence of noise
- The Group Lasso for Logistic Regression
- A new approach to variable selection in least squares problems
- Aggregation and Sparsity Via ℓ1 Penalized Least Squares
- Sparse Density Estimation with ℓ1 Penalties