High-dimensional generalized linear models and the lasso


Publication:2426617

DOI: 10.1214/009053607000000929
zbMath: 1138.62323
arXiv: 0804.0703
OpenAlex: W3102942031
Wikidata: Q105584261
Scholia: Q105584261
MaRDI QID: Q2426617

Sara van de Geer

Publication date: 23 April 2008

Published in: The Annals of Statistics

Full work available at URL: https://arxiv.org/abs/0804.0703




Related Items (showing the first 100 items)

On an extension of the promotion time cure model
Greedy algorithms for prediction
An introduction to recent advances in high/infinite dimensional statistics
Worst possible sub-directions in high-dimensional models
Adaptive kernel estimation of the baseline function in the Cox model with high-dimensional covariates
Censored linear model in high dimensions. Penalised linear regression on high-dimensional data with left-censored response variable
Penalized logspline density estimation using total variation penalty
Adaptive log-density estimation
Some sharp performance bounds for least squares regression with \(L_1\) regularization
Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model
GSDAR: a fast Newton algorithm for \(\ell_0\) regularized generalized linear models with statistical guarantee
Sparsity in penalized empirical risk minimization
Screening-based Bregman divergence estimation with NP-dimensionality
AIC for the Lasso in generalized linear models
Ridge regression revisited: debiasing, thresholding and bootstrap
Estimation of matrices with row sparsity
Weighted Lasso estimates for sparse logistic regression: non-asymptotic properties with measurement errors
Sub-optimality of some continuous shrinkage priors
A data-driven line search rule for support recovery in high-dimensional data analysis
Oracle inequalities for the lasso in the Cox model
Statistical significance in high-dimensional linear models
The Dantzig selector and sparsity oracle inequalities
Sparse recovery under matrix uncertainty
Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
\(\ell_{1}\)-regularized linear regression: persistence and oracle inequalities
\(\ell_{1}\)-penalization for mixture regression models
Rejoinder to the comments on: \(\ell_{1}\)-penalization for mixture regression models
Consistent group selection in high-dimensional linear regression
Robust machine learning by median-of-means: theory and practice
Profiled adaptive elastic-net procedure for partially linear models with high-dimensional covariates
Aggregation of estimators and stochastic optimization
Adaptive Dantzig density estimation
Group selection in high-dimensional partially linear additive models
Variable selection for sparse logistic regression
Regularizers for structured sparsity
Fixed and random effects selection in nonparametric additive mixed models
Maximum likelihood estimation in logistic regression models with a diverging number of covariates
PAC-Bayesian estimation and prediction in sparse additive models
Sparse least trimmed squares regression for analyzing high-dimensional large data sets
Sparse regression learning by aggregation and Langevin Monte-Carlo
Honest variable selection in linear and logistic regression models via \(\ell_{1}\) and \(\ell_{1}+\ell_{2}\) penalization
Dimension reduction and variable selection in case control studies via regularized likelihood optimization
On the conditions used to prove oracle results for the Lasso
Self-concordant analysis for logistic regression
The Lasso as an \(\ell_{1}\)-ball model selection procedure
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
Least squares after model selection in high-dimensional sparse models
Mirror averaging with sparsity priors
The log-linear group-lasso estimator and its asymptotic properties
Sign-constrained least squares estimation for high-dimensional regression
Transductive versions of the Lasso and the Dantzig selector
General nonexact oracle inequalities for classes with a subexponential envelope
Consistency of logistic classifier in abstract Hilbert spaces
Generalization of constraints for high dimensional regression problems
Oracle inequalities and optimal inference under group sparsity
Bayesian high-dimensional screening via MCMC
A systematic review on model selection in high-dimensional regression
Bayesian model selection for generalized linear models using non-local priors
Factor models and variable selection in high-dimensional regression analysis
Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
A new perspective on least squares under convex constraint
High-dimensional Bayesian inference in nonparametric additive models
\(L_1\)-penalization in functional linear regression with subgaussian design
Discussion: One-step sparse estimates in nonconcave penalized likelihood models
The sparsity and bias of the LASSO selection in high-dimensional linear regression
Robust inference on average treatment effects with possibly more covariates than observations
Parallel integrative learning for large-scale multi-response regression with incomplete outcomes
High dimensional censored quantile regression
Additive model selection
Restricted strong convexity implies weak submodularity
Regularization and the small-ball method. I: Sparse recovery
Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
Robust rank correlation based screening
Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error
Exponential screening and optimal rates of sparse estimation
Performance guarantees for individualized treatment rules
Least angle and \(\ell_{1}\) penalized regression: a review
Sure independence screening in generalized linear models with NP-dimensionality
On asymptotically optimal confidence regions and tests for high-dimensional models
Greedy variance estimation for the LASSO
Nearly unbiased variable selection under minimax concave penalty
Variable selection in nonparametric additive models
SPADES and mixture models
Lasso-type recovery of sparse representations for high-dimensional data
Some theoretical results on the grouped variables Lasso
Graphical-model based high dimensional generalized linear models
Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
Variable selection in the accelerated failure time model via the bridge method
APPLE: approximate path for penalized likelihood estimators
Sparse recovery in convex hulls via entropy penalization
SCAD-penalized regression in high-dimensional partially linear models
Elastic-net regularization in learning theory
A convex programming solution based debiased estimator for quantile with missing response and high-dimensional covariables
High-dimensional additive modeling
A sequential feature selection procedure for high-dimensional Cox proportional hazards model
Generalization error bounds of dynamic treatment regimes in penalized regression-based learning
High dimensional generalized linear models for temporal dependent data
Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
The first-order necessary conditions for sparsity constrained optimization


Uses Software


Cites Work




This page was built for publication: High-dimensional generalized linear models and the lasso