High-dimensional generalized linear models and the lasso

From MaRDI portal
Publication:2426617

DOI: 10.1214/009053607000000929
zbMath: 1138.62323
arXiv: 0804.0703
OpenAlex: W3102942031
Wikidata: Q105584261 (Scholia: Q105584261)
MaRDI QID: Q2426617

Sara van de Geer

Publication date: 23 April 2008

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/0804.0703
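For context, the paper studies \(\ell_1\)-penalized estimation in high-dimensional generalized linear models. A minimal sketch of that setting (not code from the paper; scikit-learn's `LogisticRegression` with an l1 penalty stands in for the estimator analyzed there, and all data and parameter choices below are illustrative):

```python
# Sketch: l1-penalized logistic regression, a GLM of the kind the
# paper analyzes, in the p >> n regime with a sparse truth.
# All names and parameter values here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p, s = 100, 200, 5                 # n samples, p >> n features, s active
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 2.0                        # sparse true coefficient vector
y = (X @ beta + rng.standard_normal(n) > 0).astype(int)

# Lasso penalty: smaller C means stronger l1 regularization
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)

n_selected = np.count_nonzero(model.coef_)
print(n_selected)  # far fewer than p coefficients survive
```

The l1 penalty drives most of the 200 coefficient estimates exactly to zero, which is the sparse-recovery behavior the paper's oracle inequalities quantify.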



Related Items

On an extension of the promotion time cure model
Greedy algorithms for prediction
An introduction to recent advances in high/infinite dimensional statistics
Worst possible sub-directions in high-dimensional models
Adaptive kernel estimation of the baseline function in the Cox model with high-dimensional covariates
Censored linear model in high dimensions. Penalised linear regression on high-dimensional data with left-censored response variable
Penalized logspline density estimation using total variation penalty
Adaptive log-density estimation
Some sharp performance bounds for least squares regression with \(L_1\) regularization
Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model
GSDAR: a fast Newton algorithm for \(\ell_0\) regularized generalized linear models with statistical guarantee
Sparsity in penalized empirical risk minimization
Screening-based Bregman divergence estimation with NP-dimensionality
AIC for the Lasso in generalized linear models
Ridge regression revisited: debiasing, thresholding and bootstrap
Estimation of matrices with row sparsity
Weighted Lasso estimates for sparse logistic regression: non-asymptotic properties with measurement errors
Sub-optimality of some continuous shrinkage priors
A data-driven line search rule for support recovery in high-dimensional data analysis
Oracle inequalities for the lasso in the Cox model
Statistical significance in high-dimensional linear models
The Dantzig selector and sparsity oracle inequalities
Sparse recovery under matrix uncertainty
Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
\(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities
\(\ell_{1}\)-penalization for mixture regression models
Rejoinder to the comments on: \(\ell _{1}\)-penalization for mixture regression models
Consistent group selection in high-dimensional linear regression
Robust machine learning by median-of-means: theory and practice
Profiled adaptive elastic-net procedure for partially linear models with high-dimensional covariates
Aggregation of estimators and stochastic optimization
Adaptive Dantzig density estimation
Group selection in high-dimensional partially linear additive models
Variable selection for sparse logistic regression
Regularizers for structured sparsity
Fixed and random effects selection in nonparametric additive mixed models
Maximum likelihood estimation in logistic regression models with a diverging number of covariates
PAC-Bayesian estimation and prediction in sparse additive models
Sparse least trimmed squares regression for analyzing high-dimensional large data sets
Sparse regression learning by aggregation and Langevin Monte-Carlo
Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
Dimension reduction and variable selection in case control studies via regularized likelihood optimization
On the conditions used to prove oracle results for the Lasso
Self-concordant analysis for logistic regression
The Lasso as an \(\ell _{1}\)-ball model selection procedure
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
Least squares after model selection in high-dimensional sparse models
Mirror averaging with sparsity priors
The log-linear group-lasso estimator and its asymptotic properties
Sign-constrained least squares estimation for high-dimensional regression
Transductive versions of the Lasso and the Dantzig selector
General nonexact oracle inequalities for classes with a subexponential envelope
Consistency of logistic classifier in abstract Hilbert spaces
Generalization of constraints for high dimensional regression problems
Oracle inequalities and optimal inference under group sparsity
Bayesian high-dimensional screening via MCMC
A systematic review on model selection in high-dimensional regression
Bayesian model selection for generalized linear models using non-local priors
Factor models and variable selection in high-dimensional regression analysis
Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
A new perspective on least squares under convex constraint
High-dimensional Bayesian inference in nonparametric additive models
\(L_1\)-penalization in functional linear regression with subgaussian design
Discussion: One-step sparse estimates in nonconcave penalized likelihood models
The sparsity and bias of the LASSO selection in high-dimensional linear regression
Robust inference on average treatment effects with possibly more covariates than observations
Parallel integrative learning for large-scale multi-response regression with incomplete outcomes
High dimensional censored quantile regression
Additive model selection
Restricted strong convexity implies weak submodularity
Regularization and the small-ball method. I: Sparse recovery
Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
Robust rank correlation based screening
Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error
Exponential screening and optimal rates of sparse estimation
Performance guarantees for individualized treatment rules
Least angle and \(\ell _{1}\) penalized regression: a review
Sure independence screening in generalized linear models with NP-dimensionality
On asymptotically optimal confidence regions and tests for high-dimensional models
Greedy variance estimation for the LASSO
Nearly unbiased variable selection under minimax concave penalty
Variable selection in nonparametric additive models
SPADES and mixture models
Lasso-type recovery of sparse representations for high-dimensional data
Some theoretical results on the grouped variables Lasso
Graphical-model based high dimensional generalized linear models
Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
Variable selection in the accelerated failure time model via the bridge method
APPLE: approximate path for penalized likelihood estimators
Sparse recovery in convex hulls via entropy penalization
SCAD-penalized regression in high-dimensional partially linear models
Elastic-net regularization in learning theory
A convex programming solution based debiased estimator for quantile with missing response and high-dimensional covariables
High-dimensional additive modeling
A sequential feature selection procedure for high-dimensional Cox proportional hazards model
Generalization error bounds of dynamic treatment regimes in penalized regression-based learning
High dimensional generalized linear models for temporal dependent data
Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
The first-order necessary conditions for sparsity constrained optimization
The robust desparsified lasso and the focused information criterion for high-dimensional generalized linear models
Experimental comparison of functional and multivariate spectral-based supervised classification methods in hyperspectral image
Elastic-net Regularized High-dimensional Negative Binomial Regression: Consistency and Weak Signal Detection
Variable selection for semiparametric regression models with iterated penalisation
Hard Thresholding Regularised Logistic Regression: Theory and Algorithms
Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization
Simultaneous analysis of Lasso and Dantzig selector
Oracle Estimation of a Change Point in High-Dimensional Quantile Regression
Overlapping group lasso for high-dimensional generalized linear models
Sparsest representations and approximations of an underdetermined linear system
Estimation and variable selection in partial linear single index models with error-prone linear covariates
Regularized estimation in sparse high-dimensional multivariate regression, with application to a DNA methylation study
Penalised robust estimators for sparse and high-dimensional linear models
The degrees of freedom of partly smooth regularizers
A unified penalized method for sparse additive quantile models: an RKHS approach
A robust high dimensional estimation of a finite mixture of the generalized linear model
SONIC: social network analysis with influencers and communities
Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
Binacox: automatic cut‐point detection in high‐dimensional Cox model with applications in genetics
Efficient multiple change point detection for high‐dimensional generalized linear models
Non-asymptotic oracle inequalities for the Lasso and Group Lasso in high dimensional logistic model
Debiased lasso for generalized linear models with a diverging number of covariates
Model aggregation for doubly divided data with large size and large dimension
Controlling False Discovery Rate Using Gaussian Mirrors
Treatment Effect Estimation Under Additive Hazards Models With High-Dimensional Confounding
Statistical Inference for High-Dimensional Generalized Linear Models With Binary Outcomes
Individual Data Protected Integrative Regression Analysis of High-Dimensional Heterogeneous Data
Targeting underrepresented populations in precision medicine: a federated transfer learning approach
Online inference in high-dimensional generalized linear models with streaming data
Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression
A dynamic screening algorithm for hierarchical binary marketing data
\(\ell_1\)-penalized quantile regression in high-dimensional sparse models
High-dimensional local linear regression under sparsity and convex losses
A primal and dual active set algorithm for truncated \(L_1\) regularized logistic regression
A consistent algorithm to solve Lasso, elastic-net and Tikhonov regularization
High-dimensional sparse portfolio selection with nonnegative constraint
Calibrating nonconvex penalized regression in ultra-high dimension
On a Semiparametric Data‐Driven Nonlinear Model with Penalized Spatio‐Temporal Lag Interactions
Moderate-Dimensional Inferences on Quadratic Functionals in Ordinary Least Squares
A Tuning-free Robust and Efficient Approach to High-dimensional Regression
Cross-Validation With Confidence
PUlasso: High-Dimensional Variable Selection With Presence-Only Data
Pivotal estimation via square-root lasso in nonparametric regression
On b-bit min-wise hashing for large-scale regression and classification with sparse data
High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
The Lasso for High Dimensional Regression with a Possible Change Point
The likelihood ratio test in high-dimensional logistic regression is asymptotically a rescaled Chi-square
Forward regression for Cox models with high-dimensional covariates
Global and local two-sample tests via regression
Endogeneity in high dimensions
Adaptive Bayesian density regression for high-dimensional data
Quasi-likelihood and/or robust estimation in high dimensions
A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
A general theory of concave regularization for high-dimensional sparse estimation problems
Convergence and sparsity of Lasso and group Lasso in high-dimensional generalized linear models
High-dimensional linear model selection motivated by multiple testing
Robust group non-convex estimations for high-dimensional partially linear models
The Group Lasso for Logistic Regression
Model selection and parameter estimation of a multinomial logistic regression model
Sharp oracle inequalities for low-complexity priors
Least trimmed squares ridge estimation in partially linear regression models
A Simple Method for Estimating Interactions Between a Treatment and a Large Number of Covariates
Least Ambiguous Set-Valued Classifiers With Bounded Error Levels
Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
Skinny Gibbs: A Consistent and Scalable Gibbs Sampler for Model Selection
High-dimensional generalized linear models incorporating graphical structure among predictors
Spectral analysis of high-dimensional time series
Regularization in Finite Mixture of Regression Models with Diverging Number of Parameters
Global and Simultaneous Hypothesis Testing for High-Dimensional Logistic Regression Models
On Hodges' superefficiency and merits of oracle property in model selection
Stability Selection
Structured estimation for the nonparametric Cox model
Asymptotic Equivalence of Regularization Methods in Thresholded Parameter Space
Lasso and probabilistic inequalities for multivariate point processes
Oracle Inequalities for Convex Loss Functions with Nonlinear Targets
Preconditioning the Lasso for sign consistency
Innovated interaction screening for high-dimensional nonlinear classification
Sparse high-dimensional varying coefficient model: nonasymptotic minimax study


Uses Software


Cites Work