scientific article; zbMATH DE number 5957408
From MaRDI portal
Publication: 3174050
zbMATH Open: 1222.62008 · MaRDI QID: Q3174050 · FDO: Q3174050
Publication date: 12 October 2011
Full work available at URL: http://www.jmlr.org/papers/v7/zhao06a.html
Title: On Model Selection Consistency of Lasso
Recommendations
- Model selection consistency of Lasso for empirical data
- Lasso with convex loss: model selection consistency and estimation
- A note on the Lasso and related procedures in model selection
- A random model approach for the LASSO
- Strong consistency of Lasso estimators
- The Lasso as an \(\ell _{1}\)-ball model selection procedure
- Regularizing LASSO: a consistent variable selection method
- Model selection consistency of \(U\)-statistics with convex loss and weighted Lasso penalty
- Model selection with mixed variables on the Lasso path
- Improving Lasso for model selection and prediction
Cited in (only the first 100 items are shown)
- Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- The adaptive Lasso in high-dimensional sparse heteroscedastic models
- On the oracle property of adaptive group Lasso in high-dimensional linear models
- Sparse estimation via nonconcave penalized likelihood in factor analysis model
- Robust Variable and Interaction Selection for Logistic Regression and General Index Models
- Learning high-dimensional directed acyclic graphs with latent and selection variables
- Covariate assisted screening and estimation
- Group selection in high-dimensional partially linear additive models
- Adaptive Dantzig density estimation
- Combining a relaxed EM algorithm with Occam's razor for Bayesian variable selection in high-dimensional regression
- Influence measures and stability for graphical models
- High dimensional discrimination analysis via a semiparametric model
- Nonnegative adaptive Lasso for ultra-high dimensional regression models and a two-stage method applied in financial modeling
- Sub-optimality of some continuous shrinkage priors
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- Bridge estimators and the adaptive Lasso under heteroscedasticity
- A note on the Lasso and related procedures in model selection
- Shrinkage estimation for identification of linear components in additive models
- Determination of vector error correction models in high dimensions
- Some theoretical results on the grouped variables Lasso
- An analysis of penalized interaction models
- Testing a single regression coefficient in high dimensional linear models
- Skinny Gibbs: a consistent and scalable Gibbs sampler for model selection
- A majorization-minimization approach to variable selection using spike and slab priors
- Rejoinder: Latent variable graphical model selection via convex optimization
- Sparse regression with exact clustering
- The Lasso problem and uniqueness
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- On constrained and regularized high-dimensional regression
- Graphical-model based high dimensional generalized linear models
- Bayesian linear regression with sparse priors
- Discussion: Latent variable graphical model selection via convex optimization
- Statistical consistency of coefficient-based conditional quantile regression
- LOL selection in high dimension
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
- Prediction and estimation consistency of sparse multi-class penalized optimal scoring
- Near-ideal model selection by \(\ell _{1}\) minimization
- Bayesian factor-adjusted sparse regression
- The lasso under Poisson-like heteroscedasticity
- "Preconditioning" for feature selection and regression in high-dimensional problems
- Asymptotic properties of bridge estimators in sparse high-dimensional regression models
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Sparse wavelet regression with multiple predictive curves
- QUADRO: a supervised dimension reduction method via Rayleigh quotient optimization
- On model selection consistency of regularized M-estimators
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- Nonnegative-Lasso and application in index tracking
- An iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- On the asymptotic properties of the group lasso estimator for linear models
- Thresholding-based iterative selection procedures for model selection and shrinkage
- Preconditioning the Lasso for sign consistency
- Convergence and sparsity of Lasso and group Lasso in high-dimensional generalized linear models
- A Bayesian approach to sparse dynamic network identification
- Nonnegative elastic net and application in index tracking
- Meta-analysis based variable selection for gene expression data
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Rejoinder: "A significance test for the lasso"
- A note on the one-step estimator for ultrahigh dimensionality
- On the consistency of feature selection using greedy least squares regression
- Weaker regularity conditions and sparse recovery in high-dimensional regression
- Self-concordant analysis for logistic regression
- The Lasso as an \(\ell _{1}\)-ball model selection procedure
- Calibrating nonconvex penalized regression in ultra-high dimension
- Bayesian hyper-Lassos with non-convex penalization
- Strong oracle optimality of folded concave penalized estimation
- An automated approach towards sparse single-equation cointegration modelling
- Sparse reduced-rank regression for multivariate varying-coefficient models
- Oracle inequalities for the lasso in the Cox model
- Penalized likelihood regression for generalized linear models with non-quadratic penalties
- Consistent group selection in high-dimensional linear regression
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Optimal variable selection in multi-group sparse discriminant analysis
- High-Dimensional Gaussian Graphical Regression Models with Covariates
- Adaptive and reversed penalty for analysis of high-dimensional correlated data
- Fitting sparse linear models under the sufficient and necessary condition for model identification
- Nonsmoothness in machine learning: specific structure, proximal identification, and applications
- A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
- Penalized logspline density estimation using total variation penalty
- Image denoising via solution paths
- Estimation of an oblique structure via penalized likelihood factor analysis
- Structured variable selection via prior-induced hierarchical penalty functions
- A penalized likelihood method for structural equation modeling
- Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
- Regularization and the small-ball method. I: Sparse recovery
- Parametric and semiparametric reduced-rank regression with flexible sparsity
- Model selection consistency of Lasso for empirical data
- Nonparametric independence screening via favored smoothing bandwidth
- Linear hypothesis testing in dense high-dimensional linear models
- On the post selection inference constant under restricted isometry properties
- Broken adaptive ridge regression and its asymptotic properties
- Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions
- Low complexity regularization of linear inverse problems
- Discussion: Latent variable graphical model selection via convex optimization
- Sensitivity analysis for mirror-stratifiable convex functions
- Penalized Regression for Multiple Types of Many Features With Missing Data
- A tuning-free robust and efficient approach to high-dimensional regression
- Robust machine learning by median-of-means: theory and practice
- Adaptive estimation of covariance matrices via Cholesky decomposition