Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization

From MaRDI portal
Publication: Q1951794

DOI: 10.1214/08-EJS287 · zbMath: 1320.62170 · arXiv: 0808.4051 · OpenAlex: W3105629641 · MaRDI QID: Q1951794

Florentina Bunea

Publication date: 24 May 2013

Published in: Electronic Journal of Statistics

Full work available at URL: https://arxiv.org/abs/0808.4051
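The title of this record refers to variable selection via \(\ell_{1}\) (lasso) and \(\ell_{1}+\ell_{2}\) (elastic-net) penalties. A minimal pure-Python coordinate-descent sketch (illustrative only — the `elastic_net` function and the toy data below are assumptions, not the paper's method or results) shows how the soft-thresholding step drives irrelevant coefficients exactly to zero:

```python
# Illustrative sketch: coordinate descent for l1 (lasso) and l1+l2
# (elastic-net) penalized least squares. Not taken from the paper;
# it only demonstrates the selection effect of the l1 penalty.

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def elastic_net(X, y, lam1, lam2, n_iter=200):
    """Minimize (1/2n)||y - Xb||^2 + lam1*||b||_1 + (lam2/2)*||b||_2^2
    by cyclic coordinate descent (lam2 = 0 gives the lasso)."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual leaving out coordinate j
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            norm = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft_threshold(rho, lam1) / (norm + lam2)
    return b

# Toy data: y depends only on the first feature (y = 2 * x1).
X = [[1, 0.5], [2, -1], [3, 0.2], [4, 1.0], [5, -0.3]]
y = [2 * row[0] for row in X]
b = elastic_net(X, y, lam1=0.1, lam2=0.0)
# b[0] is close to 2; b[1] is driven exactly to zero by the l1 penalty.
```

With `lam2 > 0` the same routine gives the \(\ell_{1}+\ell_{2}\) (elastic-net) estimator, whose ridge term stabilizes the fit under correlated covariates while the \(\ell_{1}\) term retains exact zeros, i.e. variable selection.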



Related Items

A general family of trimmed estimators for robust high-dimensional data analysis
Simultaneous variable selection and de-coarsening in multi-path change-point models
The information detection for the generalized additive model
Elastic-net Regularized High-dimensional Negative Binomial Regression: Consistency and Weak Signal Detection
Adaptive log-density estimation
Sliding window strategy for convolutional spike sorting with Lasso. Algorithm, theoretical guarantees and complexity
Nonlinear Variable Selection via Deep Neural Networks
Weighted Lasso estimates for sparse logistic regression: non-asymptotic properties with measurement errors
A note on the asymptotic distribution of lasso estimator for correlated data
Overlapping group lasso for high-dimensional generalized linear models
On estimation error bounds of the Elastic Net when p ≫ n
The degrees of freedom of partly smooth regularizers
Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
Discussion of "Correlated variables in regression: clustering and sparse estimation"
Group sparse recovery via group square-root elastic net and the iterative multivariate thresholding-based algorithm
A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
Lasso regression in sparse linear model with \(\varphi\)-mixing errors
A consistent algorithm to solve Lasso, elastic-net and Tikhonov regularization
Variable selection for sparse logistic regression
Estimating networks with jumps
Lasso, iterative feature selection and the correlation selector: oracle inequalities and numerical performances
Dimension reduction and variable selection in case control studies via regularized likelihood optimization
Self-concordant analysis for logistic regression
The smooth-Lasso and other \(\ell_{1}+\ell_{2}\)-penalized methods
SPADES and mixture models
A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
Graphical-model based high dimensional generalized linear models
Joint variable and rank selection for parsimonious estimation of high-dimensional matrices
Concentration Inequalities for Statistical Inference
New estimation and feature selection methods in mixture-of-experts models
Tuning parameter calibration for \(\ell_1\)-regularized logistic regression
On model selection consistency of regularized M-estimators
Innovated interaction screening for high-dimensional nonlinear classification
The first-order necessary conditions for sparsity constrained optimization



Cites Work