High-dimensional generalized linear models and the lasso
Abstract: We consider high-dimensional generalized linear models with Lipschitz loss functions, and prove a nonasymptotic oracle inequality for the empirical risk minimizer with Lasso penalty. The penalty is based on the coefficients in the linear predictor, after normalization with the empirical norm. The examples include logistic regression, density estimation and classification with hinge loss. Least squares regression is also discussed.
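As an illustration of the estimator described in the abstract, the following is a minimal sketch (not code from the paper) of the \(\ell_1\)-penalized empirical risk minimizer for the logistic-regression example, solved by proximal gradient descent with coordinatewise soft-thresholding. The normalization of each column by its empirical norm mirrors the abstract's description of the penalty; the step size, the penalty level \(\lambda\), and the toy data are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: l1-penalized logistic regression (Lasso for a GLM),
# solved by proximal gradient descent (ISTA). Assumes y takes values in
# {-1, +1} and no design column is identically zero.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (coordinatewise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_logistic(X, y, lam=0.1, n_iter=500):
    """Minimize (1/n) * sum_i log(1 + exp(-y_i x_i' b)) + lam * ||b||_1,
    after scaling each column of X to unit empirical norm."""
    n, p = X.shape
    norms = np.linalg.norm(X, axis=0) / np.sqrt(n)   # empirical norms ||x_j||_n
    Xs = X / norms                                   # normalized design
    # The logistic-loss gradient is Lipschitz with constant at most
    # ||Xs||_2^2 / (4n); its inverse is a safe step size for ISTA.
    step = 4.0 * n / np.linalg.norm(Xs, 2) ** 2
    b = np.zeros(p)
    for _ in range(n_iter):
        margins = np.clip(y * (Xs @ b), -30.0, 30.0)  # clip to avoid overflow
        grad = -(Xs.T @ (y / (1.0 + np.exp(margins)))) / n
        b = soft_threshold(b - step * grad, step * lam)
    return b / norms   # undo the column scaling

# Toy run: sparse ground truth in a p > n regime (illustrative data only).
rng = np.random.default_rng(0)
n, p = 100, 300
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0
y = np.sign(X @ beta + 0.1 * rng.standard_normal(n))
b_hat = lasso_logistic(X, y, lam=0.05)
print("selected coordinates:", np.flatnonzero(np.abs(b_hat) > 1e-8))
```

In the toy run the soft-thresholding step sets most coordinates exactly to zero, which is the kind of sparsity the paper's oracle inequality quantifies.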
Recommendations
- Convergence and sparsity of Lasso and group Lasso in high-dimensional generalized linear models
- Sparsity oracle inequalities for the Lasso
- Quasi-likelihood and/or robust estimation in high dimensions
- Adaptive lasso for generalized linear models with a diverging number of parameters
- Non-asymptotic oracle inequalities for the Lasso and group Lasso in high dimensional logistic model
Cites work
- scientific article; zbMATH DE number 5957408 (no title available)
- scientific article; zbMATH DE number 5654889 (no title available)
- scientific article; zbMATH DE number 49190 (no title available)
- scientific article; zbMATH DE number 845714 (no title available)
- A Bennett concentration inequality and its application to suprema of empirical processes
- About the constants in Talagrand's concentration inequalities for empirical processes
- Aggregation and Sparsity Via \(\ell_1\) Penalized Least Squares
- Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
- Classifiers of support vector machine type with \(\ell_1\) complexity regularization
- Convex Analysis
- De-noising by soft-thresholding
- For most large underdetermined systems of equations, the minimal \(\ell_1\)-norm near-solution approximates the sparsest near-solution
- For most large underdetermined systems of linear equations the minimal \(\ell_1\)-norm solution is also the sparsest solution
- High-dimensional graphs and variable selection with the Lasso
- Lasso-type recovery of sparse representations for high-dimensional data
- On Talagrand's deviation inequalities for product measures
- Optimal aggregation of classifiers in statistical learning
- Relaxed Lasso
- Some applications of concentration inequalities to statistics
- Sparsity oracle inequalities for the Lasso
- The Group Lasso for Logistic Regression
- The elements of statistical learning. Data mining, inference, and prediction
Cited in (first 100 items shown)
- Penalised robust estimators for sparse and high-dimensional linear models
- Bayesian high-dimensional screening via MCMC
- Pivotal estimation via square-root lasso in nonparametric regression
- Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
- General nonexact oracle inequalities for classes with a subexponential envelope
- Mirror averaging with sparsity priors
- On the prediction loss of the Lasso in the partially labeled setting
- Penalized logspline density estimation using total variation penalty
- Sparse recovery in convex hulls via entropy penalization
- Model selection and parameter estimation of a multinomial logistic regression model
- Dimension reduction and variable selection in case control studies via regularized likelihood optimization
- Generalization of constraints for high dimensional regression problems
- Sparse regression learning by aggregation and Langevin Monte-Carlo
- Forward regression for Cox models with high-dimensional covariates
- Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
- Regularization and the small-ball method. I: Sparse recovery
- Variable selection for semiparametric regression models with iterated penalisation
- Discussion: One-step sparse estimates in nonconcave penalized likelihood models
- On cross-validated Lasso in high dimensions
- Bayesian model selection for generalized linear models using non-local priors
- Profiled adaptive elastic-net procedure for partially linear models with high-dimensional covariates
- Shrinkage and LASSO strategies in high-dimensional heteroscedastic models
- Non-asymptotic oracle inequalities for the Lasso and group Lasso in high dimensional logistic model
- Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression
- Dynamic pricing in high-dimensions
- A tuning-free robust and efficient approach to high-dimensional regression
- Cross-validation with confidence
- Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions
- High-dimensional regression and classification under a class of convex loss functions
- Divide-and-conquer for debiased \(l_1\)-norm support vector machine in ultra-high dimensions
- Maximum likelihood estimation in logistic regression models with a diverging number of covariates
- Screening-based Bregman divergence estimation with NP-dimensionality
- Fixed and random effects selection in nonparametric additive mixed models
- AIC for the Lasso in generalized linear models
- Least Ambiguous Set-Valued Classifiers With Bounded Error Levels
- Additive model selection
- On an extension of the promotion time cure model
- Sparsity in penalized empirical risk minimization
- Robust machine learning by median-of-means: theory and practice
- Estimation of matrices with row sparsity
- High-dimensional Bayesian inference in nonparametric additive models
- Adaptive log-density estimation
- Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
- Transductive versions of the Lasso and the Dantzig selector
- Elastic-net regularization in learning theory
- Rejoinder to the comments on: \(\ell _{1}\)-penalization for mixture regression models
- Statistical Inference for High-Dimensional Generalized Linear Models With Binary Outcomes
- A consistent algorithm to solve Lasso, elastic-net and Tikhonov regularization
- Oracle inequalities for high-dimensional prediction
- Estimation and variable selection in partial linear single index models with error-prone linear covariates
- A provable smoothing approach for high dimensional generalized regression with applications in genomics
- A new perspective on least squares under convex constraint
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- A simple method for estimating interactions between a treatment and a large number of covariates
- On asymptotically optimal confidence regions and tests for high-dimensional models
- The Group Lasso for Logistic Regression
- Adaptive Lasso estimators for ultrahigh dimensional generalized linear models
- Least squares after model selection in high-dimensional sparse models
- Sign-constrained least squares estimation for high-dimensional regression
- Exponential screening and optimal rates of sparse estimation
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Consistent group selection in high-dimensional linear regression
- Sparse least trimmed squares regression for analyzing high-dimensional large data sets
- Robust inference on average treatment effects with possibly more covariates than observations
- Stability Selection
- Adaptive kernel estimation of the baseline function in the Cox model with high-dimensional covariates
- Group selection in high-dimensional partially linear additive models
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- High-dimensional generalized linear models incorporating graphical structure among predictors
- APPLE: approximate path for penalized likelihood estimators
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Sparsity oracle inequalities for the Lasso
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- \(\ell_{1}\)-penalization for mixture regression models
- Factor models and variable selection in high-dimensional regression analysis
- Parallel integrative learning for large-scale multi-response regression with incomplete outcomes
- Hierarchical shrinkage priors and model fitting for high-dimensional generalized linear models
- High-dimensional additive modeling
- Adaptive Dantzig density estimation
- Oracle inequalities and optimal inference under group sparsity
- Censored linear model in high dimensions. Penalised linear regression on high-dimensional data with left-censored response variable
- Robust rank correlation based screening
- Performance guarantees for individualized treatment rules
- Variable selection in nonparametric additive models
- Simultaneous analysis of Lasso and Dantzig selector
- Preconditioning the Lasso for sign consistency
- Oracle inequalities for the lasso in the Cox model
- Inference in high dimensional generalized linear models based on soft thresholding
- Worst possible sub-directions in high-dimensional models
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Some theoretical results on the grouped variables Lasso
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Lasso-type recovery of sparse representations for high-dimensional data
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Nearly unbiased variable selection under minimax concave penalty
- Statistical significance in high-dimensional linear models
- An introduction to recent advances in high/infinite dimensional statistics
- Lasso and probabilistic inequalities for multivariate point processes
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error