scientific article; zbMATH DE number 5957408
From MaRDI portal
Publication: 3174050
zbMATH Open: 1222.62008; MaRDI QID: Q3174050
Publication date: 12 October 2011
Full work available at URL: http://www.jmlr.org/papers/v7/zhao06a.html
Title of this publication is not available
Recommendations
- Model selection consistency of Lasso for empirical data
- Lasso with convex loss: model selection consistency and estimation
- A note on the Lasso and related procedures in model selection
- A random model approach for the LASSO
- Strong consistency of Lasso estimators
- The Lasso as an \(\ell _{1}\)-ball model selection procedure
- Regularizing LASSO: a consistent variable selection method
- Model selection consistency of \(U\)-statistics with convex loss and weighted Lasso penalty
- Model selection with mixed variables on the Lasso path
- Improving Lasso for model selection and prediction
Cited In (showing first 100 items)
- Adaptive and reversed penalty for analysis of high-dimensional correlated data
- Fitting sparse linear models under the sufficient and necessary condition for model identification
- Nonsmoothness in machine learning: specific structure, proximal identification, and applications
- A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
- Penalized logspline density estimation using total variation penalty
- Image denoising via solution paths
- Estimation of an oblique structure via penalized likelihood factor analysis
- Structured variable selection via prior-induced hierarchical penalty functions
- A penalized likelihood method for structural equation modeling
- Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
- Regularization and the small-ball method. I: Sparse recovery
- Parametric and semiparametric reduced-rank regression with flexible sparsity
- Model selection consistency of Lasso for empirical data
- Nonparametric independence screening via favored smoothing bandwidth
- Linear hypothesis testing in dense high-dimensional linear models
- On the post selection inference constant under restricted isometry properties
- Broken adaptive ridge regression and its asymptotic properties
- Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions
- Low complexity regularization of linear inverse problems
- Discussion: Latent variable graphical model selection via convex optimization
- Sensitivity analysis for mirror-stratifiable convex functions
- Penalized Regression for Multiple Types of Many Features With Missing Data
- A tuning-free robust and efficient approach to high-dimensional regression
- Robust machine learning by median-of-means: theory and practice
- Adaptive estimation of covariance matrices via Cholesky decomposition
- Uniformly valid confidence sets based on the Lasso
- Tuning parameter calibration for \(\ell_1\)-regularized logistic regression
- Sparse linear models and \(l_1\)-regularized 2SLS with high-dimensional endogenous regressors and instruments
- Simultaneous feature selection and clustering based on square root optimization
- High-dimensional sparse portfolio selection with nonnegative constraint
- REMI: regression with marginal information and its application in genome-wide association studies
- Exploiting Disagreement Between High-Dimensional Variable Selectors for Uncertainty Visualization
- Learning theory approach to a system identification problem involving atomic norm
- A First-Order Optimization Algorithm for Statistical Learning with Hierarchical Sparsity Structure
- High-dimensional Cox models: the choice of penalty as part of the model building process
- Efficient nonconvex sparse group feature selection via continuous and discrete optimization
- Second-order Stein: SURE for SURE and other applications in high-dimensional inference
- Online streaming feature selection using rough sets
- Robust inference for high‐dimensional single index models
- Efficient Threshold Selection for Multivariate Total Variation Denoising
- High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
- Lassoing the HAR model: a model selection perspective on realized volatility dynamics
- A review of Gaussian Markov models for conditional independence
- Sparse Laplacian shrinkage with the graphical Lasso estimator for regression problems
- Penalized wavelet estimation and robust denoising for irregular spaced data
- Risk consistency of cross-validation with Lasso-type procedures
- High-dimensional linear model selection motivated by multiple testing
- Sparse regression at scale: branch-and-bound rooted in first-order optimization
- High-dimensional regression with potential prior information on variable importance
- Large-scale multivariate sparse regression with applications to UK Biobank
- Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- The adaptive Lasso in high-dimensional sparse heteroscedastic models
- On the oracle property of adaptive group Lasso in high-dimensional linear models
- Sparse estimation via nonconcave penalized likelihood in factor analysis model
- Robust Variable and Interaction Selection for Logistic Regression and General Index Models
- Learning high-dimensional directed acyclic graphs with latent and selection variables
- Covariate assisted screening and estimation
- Group selection in high-dimensional partially linear additive models
- Adaptive Dantzig density estimation
- Combining a relaxed EM algorithm with Occam's razor for Bayesian variable selection in high-dimensional regression
- Influence measures and stability for graphical models
- High dimensional discrimination analysis via a semiparametric model
- Nonnegative adaptive Lasso for ultra-high dimensional regression models and a two-stage method applied in financial modeling
- Sub-optimality of some continuous shrinkage priors
- Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
- Bridge estimators and the adaptive Lasso under heteroscedasticity
- A note on the Lasso and related procedures in model selection
- Shrinkage estimation for identification of linear components in additive models
- Determination of vector error correction models in high dimensions
- Some theoretical results on the grouped variables Lasso
- An analysis of penalized interaction models
- Testing a single regression coefficient in high dimensional linear models
- Skinny Gibbs: a consistent and scalable Gibbs sampler for model selection
- A majorization-minimization approach to variable selection using spike and slab priors
- Rejoinder: Latent variable graphical model selection via convex optimization
- Sparse regression with exact clustering
- The Lasso problem and uniqueness
- The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
- On constrained and regularized high-dimensional regression
- Graphical-model based high dimensional generalized linear models
- Bayesian linear regression with sparse priors
- Discussion: Latent variable graphical model selection via convex optimization
- Statistical consistency of coefficient-based conditional quantile regression
- LOL selection in high dimension
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
- Prediction and estimation consistency of sparse multi-class penalized optimal scoring
- Near-ideal model selection by \(\ell _{1}\) minimization
- Bayesian factor-adjusted sparse regression
- The lasso under Poisson-like heteroscedasticity
- "Preconditioning" for feature selection and regression in high-dimensional problems
- Asymptotic properties of bridge estimators in sparse high-dimensional regression models
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Sparse wavelet regression with multiple predictive curves
- QUADRO: a supervised dimension reduction method via Rayleigh quotient optimization
- On model selection consistency of regularized M-estimators
- Minimax risks for sparse regressions: ultra-high dimensional phenomenons
- Nonnegative-Lasso and application in index tracking
- An iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors