High-dimensional generalized linear models and the lasso
Publication: 2426617
DOI: 10.1214/009053607000000929
zbMath: 1138.62323
arXiv: 0804.0703
OpenAlex: W3102942031
Wikidata: Q105584261 (Scholia: Q105584261)
MaRDI QID: Q2426617
Publication date: 23 April 2008
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/0804.0703
Mathematics Subject Classification: Nonparametric regression and quantile regression (62G08); Density estimation (62G07); Generalized linear models (logistic models) (62J12)
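As a quick illustration of the technique this record concerns — an \(\ell_1\)-penalized (lasso) fit of a generalized linear model — the sketch below runs lasso logistic regression on synthetic sparse data. This is not code from the paper; it uses scikit-learn's `LogisticRegression` with an `l1` penalty, and all data dimensions and penalty settings are illustrative choices.

```python
# Minimal sketch of an l1-penalized GLM (logistic regression).
# Illustrative only: dimensions, seed, and penalty strength C are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p, s = 200, 50, 3                # many features, few truly active
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 2.0                       # only the first s coefficients are nonzero
y = (X @ beta + rng.standard_normal(n) > 0).astype(int)

# The l1 penalty drives most estimated coefficients exactly to zero,
# which is the sparsity phenomenon the oracle inequalities in this
# literature quantify.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)
nonzero = np.flatnonzero(model.coef_[0])
print(f"{len(nonzero)} of {p} coefficients are nonzero")
```

Shrinking `C` (i.e. increasing the penalty) produces sparser fits; the theory in the paper and its related items concerns how well such sparse estimates recover the true support in high dimensions.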
Related Items (first 100 items shown)
On an extension of the promotion time cure model ⋮ Greedy algorithms for prediction ⋮ An introduction to recent advances in high/infinite dimensional statistics ⋮ Worst possible sub-directions in high-dimensional models ⋮
Adaptive kernel estimation of the baseline function in the Cox model with high-dimensional covariates ⋮ Censored linear model in high dimensions. Penalised linear regression on high-dimensional data with left-censored response variable ⋮
Penalized logspline density estimation using total variation penalty ⋮ Adaptive log-density estimation ⋮ Some sharp performance bounds for least squares regression with \(L_1\) regularization ⋮
Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model ⋮ GSDAR: a fast Newton algorithm for \(\ell_0\) regularized generalized linear models with statistical guarantee ⋮
Sparsity in penalized empirical risk minimization ⋮ Screening-based Bregman divergence estimation with NP-dimensionality ⋮ AIC for the Lasso in generalized linear models ⋮
Ridge regression revisited: debiasing, thresholding and bootstrap ⋮ Estimation of matrices with row sparsity ⋮ Weighted Lasso estimates for sparse logistic regression: non-asymptotic properties with measurement errors ⋮
Sub-optimality of some continuous shrinkage priors ⋮ A data-driven line search rule for support recovery in high-dimensional data analysis ⋮ Oracle inequalities for the lasso in the Cox model ⋮
Statistical significance in high-dimensional linear models ⋮ The Dantzig selector and sparsity oracle inequalities ⋮ Sparse recovery under matrix uncertainty ⋮
Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression ⋮ Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization ⋮
\(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities ⋮ \(\ell_{1}\)-penalization for mixture regression models ⋮ Rejoinder to the comments on: \(\ell _{1}\)-penalization for mixture regression models ⋮
Consistent group selection in high-dimensional linear regression ⋮ Robust machine learning by median-of-means: theory and practice ⋮ Profiled adaptive elastic-net procedure for partially linear models with high-dimensional covariates ⋮
Aggregation of estimators and stochastic optimization ⋮ Adaptive Dantzig density estimation ⋮ Group selection in high-dimensional partially linear additive models ⋮ Variable selection for sparse logistic regression ⋮
Regularizers for structured sparsity ⋮ Fixed and random effects selection in nonparametric additive mixed models ⋮ Maximum likelihood estimation in logistic regression models with a diverging number of covariates ⋮
PAC-Bayesian estimation and prediction in sparse additive models ⋮ Sparse least trimmed squares regression for analyzing high-dimensional large data sets ⋮ Sparse regression learning by aggregation and Langevin Monte-Carlo ⋮
Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization ⋮ Dimension reduction and variable selection in case control studies via regularized likelihood optimization ⋮
On the conditions used to prove oracle results for the Lasso ⋮ Self-concordant analysis for logistic regression ⋮ The Lasso as an \(\ell _{1}\)-ball model selection procedure ⋮
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso) ⋮ Least squares after model selection in high-dimensional sparse models ⋮ Mirror averaging with sparsity priors ⋮
The log-linear group-lasso estimator and its asymptotic properties ⋮ Sign-constrained least squares estimation for high-dimensional regression ⋮ Transductive versions of the Lasso and the Dantzig selector ⋮
General nonexact oracle inequalities for classes with a subexponential envelope ⋮ Consistency of logistic classifier in abstract Hilbert spaces ⋮ Generalization of constraints for high dimensional regression problems ⋮
Oracle inequalities and optimal inference under group sparsity ⋮ Bayesian high-dimensional screening via MCMC ⋮ A systematic review on model selection in high-dimensional regression ⋮
Bayesian model selection for generalized linear models using non-local priors ⋮ Factor models and variable selection in high-dimensional regression analysis ⋮ Optimal computational and statistical rates of convergence for sparse nonconvex learning problems ⋮
A new perspective on least squares under convex constraint ⋮ High-dimensional Bayesian inference in nonparametric additive models ⋮ \(L_1\)-penalization in functional linear regression with subgaussian design ⋮
Discussion: One-step sparse estimates in nonconcave penalized likelihood models ⋮ The sparsity and bias of the LASSO selection in high-dimensional linear regression ⋮ Robust inference on average treatment effects with possibly more covariates than observations ⋮
Parallel integrative learning for large-scale multi-response regression with incomplete outcomes ⋮ High dimensional censored quantile regression ⋮ Additive model selection ⋮ Restricted strong convexity implies weak submodularity ⋮
Regularization and the small-ball method. I: Sparse recovery ⋮ Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space ⋮ Robust rank correlation based screening ⋮
Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error ⋮ Exponential screening and optimal rates of sparse estimation ⋮ Performance guarantees for individualized treatment rules ⋮
Least angle and \(\ell _{1}\) penalized regression: a review ⋮ Sure independence screening in generalized linear models with NP-dimensionality ⋮ On asymptotically optimal confidence regions and tests for high-dimensional models ⋮
Greedy variance estimation for the LASSO ⋮ Nearly unbiased variable selection under minimax concave penalty ⋮ Variable selection in nonparametric additive models ⋮ SPADES and mixture models ⋮
Lasso-type recovery of sparse representations for high-dimensional data ⋮ Some theoretical results on the grouped variables Lasso ⋮ Graphical-model based high dimensional generalized linear models ⋮
Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity ⋮ Variable selection in the accelerated failure time model via the bridge method ⋮ APPLE: approximate path for penalized likelihood estimators ⋮
Sparse recovery in convex hulls via entropy penalization ⋮ SCAD-penalized regression in high-dimensional partially linear models ⋮ Elastic-net regularization in learning theory ⋮
A convex programming solution based debiased estimator for quantile with missing response and high-dimensional covariables ⋮ High-dimensional additive modeling ⋮ A sequential feature selection procedure for high-dimensional Cox proportional hazards model ⋮
Generalization error bounds of dynamic treatment regimes in penalized regression-based learning ⋮ High dimensional generalized linear models for temporal dependent data ⋮ Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates ⋮
The first-order necessary conditions for sparsity constrained optimization
Uses Software
Cites Work
- Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
- Lasso-type recovery of sparse representations for high-dimensional data
- Relaxed Lasso
- A Bennett concentration inequality and its application to suprema of empirical processes
- About the constants in Talagrand's concentration inequalities for empirical processes.
- Optimal aggregation of classifiers in statistical learning.
- Sparsity oracle inequalities for the Lasso
- High-dimensional graphs and variable selection with the Lasso
- Classifiers of support vector machine type with \(\ell_1\) complexity regularization
- On Talagrand's deviation inequalities for product measures
- The Group Lasso for Logistic Regression
- De-noising by soft-thresholding
- Aggregation and Sparsity Via ℓ1 Penalized Least Squares
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution
- For most large underdetermined systems of equations, the minimal \(\ell_1\)-norm near-solution approximates the sparsest near-solution
- Convex Analysis
- Some applications of concentration inequalities to statistics
- The elements of statistical learning. Data mining, inference, and prediction
This page was built for publication: High-dimensional generalized linear models and the lasso