\(\ell_{1}\)-penalization for mixture regression models
DOI: 10.1007/s11749-010-0197-z
zbMath: 1203.62128
arXiv: 1202.6046
OpenAlex: W3101651037
MaRDI QID: Q619141
Nicolas Städler, Sara van de Geer, Peter Bühlmann
Publication date: 22 January 2011
Published in: Test
Full work available at URL: https://arxiv.org/abs/1202.6046
Keywords: finite mixture models, oracle inequality, lasso, high-dimensional estimation, adaptive lasso, generalized EM algorithm
MSC classifications:
- Asymptotic properties of parametric estimators (62F12)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Generalized linear models (logistic models) (62J12)
- Applications of mathematical programming (90C90)
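The keywords and classifications above describe the paper's estimator. For orientation, a minimal sketch of an \(\ell_1\)-penalized criterion for a finite mixture of \(K\) Gaussian regressions reads (illustrative notation, not taken from the record; the paper itself works with a reparametrized, scale-invariant version of this objective, optimized by a generalized EM algorithm):
\[
\hat{\theta} \in \operatorname*{arg\,min}_{\theta}\; -\frac{1}{n}\sum_{i=1}^{n}\log\!\Big(\sum_{k=1}^{K}\pi_k\,\varphi\big(y_i;\, x_i^{\top}\beta_k,\, \sigma_k^2\big)\Big) \;+\; \lambda\sum_{k=1}^{K}\lVert\beta_k\rVert_1,
\]
where \(\varphi(\,\cdot\,;\mu,\sigma^2)\) denotes the Gaussian density, \(\pi_k\) the mixture proportions, \(\beta_k\) the component-specific coefficient vectors, and \(\lambda \ge 0\) the penalty level.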
Related Items
- On an extension of the promotion time cure model
- Comments on: "A random forest guided tour"
- Structured regularization for conditional Gaussian graphical models
- Screening and clustering of sparse regressions with finite non-Gaussian mixtures
- A self-calibrated direct approach to precision matrix estimation and linear discriminant analysis in high dimensions
- Variable selection in finite mixture of regression models with an unknown number of components
- Censored linear model in high dimensions. Penalised linear regression on high-dimensional data with left-censored response variable
- Joint estimation of precision matrices in heterogeneous populations
- On estimation of the diagonal elements of a sparse precision matrix
- Estimating finite mixtures of ordinal graphical models
- The benefit of group sparsity in group inference with de-biased scaled group Lasso
- A partially linear framework for massive heterogeneous data
- Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization
- Robust error density estimation in ultrahigh dimensional sparse linear model
- Robust Bayesian regularized estimation based on \(t\) regression model
- Bayesian variable selection for finite mixture model of linear regressions
- Joint rank and variable selection for parsimonious estimation in a high-dimensional finite mixture regression model
- Wavelet-based scalar-on-function finite mixture regression models
- A new model selection procedure for finite mixture regression models
- A globally convergent algorithm for Lasso-penalized mixture of linear regression models
- A hierarchical Bayesian perspective on majorization-minimization for non-convex sparse regression: application to M/EEG source imaging
- A convex optimization framework for the identification of homogeneous reaction systems
- Semiparametric estimation of a two-component mixture of linear regressions in which one component is known
- Penalised robust estimators for sparse and high-dimensional linear models
- Covariance matrix estimation of the maximum likelihood estimator in multivariate clusterwise linear regression
- Adaptive estimation in the supremum norm for semiparametric mixtures of regressions
- A robust high dimensional estimation of a finite mixture of the generalized linear model
- Penalized estimation in finite mixture of ultra-high dimensional regression models
- Minimum distance estimation in a finite mixture regression model
- Model-based regression clustering for high-dimensional data: application to functional data
- Inverse regression approach to robust nonlinear high-to-low dimensional mapping
- Histopathological imaging-based cancer heterogeneity analysis via penalized fusion with model averaging
- Hierarchical cancer heterogeneity analysis based on histopathological imaging features
- Variance estimation in high-dimensional linear regression via adaptive elastic-net
- A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
- Regression-based heterogeneity analysis to identify overlapping subgroup structure in high-dimensional data
- Regularization in dynamic random-intercepts models for analysis of longitudinal data
- Heterogeneity Analysis via Integrating Multi-Sources High-Dimensional Data With Applications to Cancer Studies
- Modelling Clustered Heterogeneity: Fixed Effects, Random Effects and Mixtures
- Bayesian variable selection in a finite mixture of linear mixed-effects models
- Mixture of inhomogeneous matrix models for species-rich ecosystems
- Modeling cell populations measured by flow cytometry with covariates using sparse mixture of regressions
- Finite mixture regression: a sparse variable selection by model selection for clustering
- SLOPE-adaptive variable selection via convex optimization
- LASSO-penalized clusterwise linear regression modelling: a two-step approach
- Adapting to unknown noise level in sparse deconvolution
- Penalized estimation in high-dimensional hidden Markov models with state-specific graphical models
- Variable selection approach for zero-inflated count data via adaptive lasso
- Ruin probabilities in the mixed claim frequency risk models
- Prediction with a flexible finite mixture-of-regressions
- Outlier detection and robust mixture modeling using nonconvex penalized likelihood
- In the pursuit of sparseness: a new rank-preserving penalty for a finite mixture of factor analyzers
- Prediction of the Nash through penalized mixture of logistic regression models
- Debiasing the Lasso: optimal sample size for Gaussian designs
- Robust variable selection for finite mixture regression models
- Robust subspace clustering
- Pivotal estimation via square-root lasso in nonparametric regression
- Nonconcave penalized composite conditional likelihood estimation of sparse Ising models
- High-dimensional integrative analysis with homogeneity and sparsity recovery
- Missing values: sparse inverse covariance estimation and an extension to sparse regression
- SOCP based variance free Dantzig selector with application to robust estimation
- An ℓ1-oracle inequality for the Lasso in multivariate finite mixture of multivariate Gaussian regression models
- Endogeneity in high dimensions
- Perspective maximum likelihood-type estimation via proximal decomposition
- Pursuing Sources of Heterogeneity in Modeling Clustered Population
- Robust finite mixture regression for heterogeneous targets
- Quasi-likelihood and/or robust estimation in high dimensions
- High-dimensional regression with unknown variance
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Tree-Structured Clustering in Fixed Effects Models
- Maximin effects in inhomogeneous large-scale data
- Variance prior forms for high-dimensional Bayesian variable selection
- A latent discrete Markov random field approach to identifying and classifying historical forest communities based on spatial multivariate tree species counts
- Asymptotic normality and optimalities in estimation of large Gaussian graphical models
- Multi-species distribution modeling using penalized mixture of regressions
- Model selection in finite mixture of regression models: a Bayesian approach with innovative weighted g-priors and reversible jump Markov chain Monte Carlo implementation
- Compound Poisson point processes, concentration and oracle inequalities
- Non-concave penalization in linear mixed-effect models and regularized selection of fixed effects
- A Sparse Learning Approach to Relative-Volatility-Managed Portfolio Selection
- Identification of sparse FIR systems using a general quantisation scheme
- Bayesian variable selection in linear regression models with non-normal errors
- Regularization in Finite Mixture of Regression Models with Diverging Number of Parameters
- Stochastic proximal-gradient algorithms for penalized mixed models
- A non-asymptotic approach for model selection via penalization in high-dimensional mixture of experts models
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- The Dantzig selector and sparsity oracle inequalities
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Near-ideal model selection by \(\ell _{1}\) minimization
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- A coordinate gradient descent method for nonsmooth separable minimization
- Lasso-type recovery of sparse representations for high-dimensional data
- Fitting finite mixtures of generalized linear regressions in \textsf{R}
- On the convergence properties of the EM algorithm
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Optimal aggregation of classifiers in statistical learning.
- Weak convergence and empirical processes. With applications to statistics
- On the conditions used to prove oracle results for the Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- Pathwise coordinate optimization
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Decoding by Linear Programming
- The Group Lasso for Logistic Regression
- Variable Selection in Finite Mixture of Regression Models
- The Bayesian Lasso
- Asymptotic Statistics
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- On Recovery of Sparse Signals Via $\ell _{1}$ Minimization
- Stable Recovery of Sparse Signals and an Oracle Inequality
- Convergence of a block coordinate descent method for nondifferentiable minimization