Relaxed Lasso
From MaRDI portal
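For orientation, the publication this page indexes (Meinshausen's relaxed lasso) is a two-stage procedure: an ordinary lasso with penalty \(\lambda\) selects a support set, and the coefficients are then refit on that support with a smaller penalty \(\phi\lambda\), \(\phi \in [0, 1]\), so that \(\phi = 0\) recovers least squares on the selected variables. A minimal sketch of this idea, using scikit-learn (the variable names and the illustrative values of `lam` and `phi` are assumptions, not from the source):

```python
# Minimal sketch of the relaxed lasso (two-stage procedure):
#   stage 1: lasso with penalty lam selects a support set;
#   stage 2: refit on that support with the smaller penalty phi * lam.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]           # sparse ground truth
y = X @ beta + 0.5 * rng.standard_normal(n)

lam, phi = 0.1, 0.2                   # illustrative tuning values

# Stage 1: variable selection via the lasso.
sel = Lasso(alpha=lam).fit(X, y)
support = np.flatnonzero(sel.coef_)

# Stage 2: relaxed refit restricted to the selected support.
coef = np.zeros(p)
if support.size:
    if phi > 0:
        refit = Lasso(alpha=phi * lam).fit(X[:, support], y)
        coef[support] = refit.coef_
    else:
        # phi = 0: plain least-squares refit on the support.
        coef[support], *_ = np.linalg.lstsq(X[:, support], y, rcond=None)

print(sorted(support.tolist()))
```

Separating selection (via \(\lambda\)) from shrinkage (via \(\phi\)) is what lets the relaxed lasso reduce the bias that a single heavily penalized lasso fit imposes on large coefficients.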
Cites work
- scientific article; zbMATH DE number 47310 (no title available)
- scientific article; zbMATH DE number 845714 (no title available)
- A Statistical View of Some Chemometrics Regression Tools
- Asymptotics for Lasso-type estimators.
- Better Subset Regression Using the Nonnegative Garrote
- High-dimensional graphs and variable selection with the Lasso
- Least angle regression. (With discussion)
- Nonconcave penalized likelihood with a diverging number of parameters.
- Robust regression: Asymptotics, conjectures and Monte Carlo
- Square root penalty: Adaption to the margin in classification and in edge estimation
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Cited in
(only the first 100 citing items are shown)
- SLOPE-adaptive variable selection via convex optimization
- A feature selection method for classification based on ensemble of penalized logistic models
- Some sharp performance bounds for least squares regression with L₁ regularization
- Multi-model subset selection
- Discussion of ``Best subset, forward stepwise or Lasso? Analysis and recommendations based on extensive comparisons''
- Pursuing sparsity and homogeneity for multi-source high-dimensional current status data
- The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods
- On the Effect and Remedies of Shrinkage on Classification Probability Estimation
- Variable selection for semiparametric regression models with iterated penalisation
- Lazy lasso for local regression
- Sure independence screening for ultrahigh dimensional feature space. With discussion and authors' reply
- Estimation in the presence of heteroskedasticity of unknown form: a Lasso-based approach
- Improving the prediction performance of the Lasso by subtracting the additive structural noises
- Sign-constrained least squares estimation for high-dimensional regression
- Degrees of Freedom: Search Cost and Self-Consistency
- FAStEN: An Efficient Adaptive Method for Feature Selection and Estimation in High-Dimensional Functional Regressions
- Mathematical programming for simultaneous feature selection and outlier detection under l1 norm
- Multi-stage convex relaxation for feature selection
- Projective inference in high-dimensional problems: prediction and feature selection
- Interquantile shrinkage and variable selection in quantile regression
- Bayesian reciprocal LASSO quantile regression
- Model selection strategies for identifying most relevant covariates in homoscedastic linear models
- Robust error density estimation in ultrahigh dimensional sparse linear model
- Stabilizing the Lasso against cross-validation variability
- Some theoretical results on the grouped variables Lasso
- Reducing bias and mitigating the influence of excess of zeros in regression covariates with multi-outcome adaptive LAD-lasso
- Markov Neighborhood Regression for High-Dimensional Inference
- Lassoed boosting and linear prediction in the equities market
- High-dimensional generalized linear models and the lasso
- A Bayesian lasso via reversible-jump MCMC
- A transfer sparse identification method for ARX model
- Monotone splines Lasso
- A majorization-minimization approach to variable selection using spike and slab priors
- Stochastic identification of malware with dynamic traces
- Penalized and constrained LAD estimation in fixed and high dimension
- Sparse regulatory networks
- Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
- Comparison of lasso type estimators for high-dimensional data
- Detection of gene-gene interactions using multistage sparse and low-rank regression
- Sparse regression with multi-type regularized feature modeling
- Semiparametric regression models with additive nonparametric components and high dimensional parametric components
- A two-step method for estimating high-dimensional Gaussian graphical models
- Network classification with applications to brain connectomics
- A component Lasso
- Empirical comparison study of approximate methods for structure selection in binary graphical models
- High dimensional single index models
- Cross-Fitted Residual Regression for High-Dimensional Heteroscedasticity Pursuit
- Lasso regression: estimation and shrinkage via the limit of Gibbs sampling
- Asymptotic behaviour of penalized robust estimators in logistic regression when dimension increases
- A look at robustness and stability of \(\ell_1\)-versus \(\ell_0\)-regularization: discussion of papers by Bertsimas et al. and Hastie et al.
- Split Regularized Regression
- Joint estimation and variable selection for mean and dispersion in proper dispersion models
- Improved variable selection with forward-lasso adaptive shrinkage
- High dimensional variable selection through group Lasso for multiple function-on-function linear regression: a case study in \(\mathrm{PM}_{10}\) monitoring
- A comparison of the Lasso and marginal regression
- MIP-BOOST: Efficient and Effective L0 Feature Selection for Linear Regression
- On the impact of model selection on predictor identification and parameter inference
- scientific article; zbMATH DE number 7750672 (no title available)
- Clustering, multicollinearity, and singular vectors
- A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
- High-dimensional partial correlation coefficients: A survey study of estimation Methods
- A discussion on practical considerations with sparse regression methodologies
- Nonlinear mediation analysis with high‐dimensional mediators whose causal structure is unknown
- Thresholding-based iterative selection procedures for model selection and shrinkage
- One component partial least squares, high dimensional regression, data splitting, and the multitude of models
- Regression trees with fused leaves
- A rank-corrected procedure for matrix completion with fixed basis coefficients
- Degrees of freedom for piecewise Lipschitz estimators
- Nonparametric Statistics and High/Infinite Dimensional Data
- On Lasso refitting strategies
- On the robustness of the generalized fused Lasso to prior specifications
- Component-wisely sparse boosting
- Editorial: Statistical learning methods including dimensionality reduction
- Exploiting Disagreement Between High-Dimensional Variable Selectors for Uncertainty Visualization
- A SAEM algorithm for fused Lasso penalized nonlinear mixed effect models: application to group comparison in pharmacokinetics
- Moderately clipped Lasso
- Two tales of variable selection for high dimensional regression: Screening and model building
- Optimal EMG placement for a robotic prosthesis controller with sequential, adaptive functional estimation (SAFE)
- Lasso and probabilistic inequalities for multivariate point processes
- Variables selection using \(\mathcal{L}_0\) penalty
- Comparison of likelihood penalization and variance decomposition approaches for clinical prediction models: a simulation study
- Variable selection in linear regression models: choosing the best subset is not always the best choice
- Bootstrapping some GLM and survival regression variable selection estimators
- scientific article; zbMATH DE number 7306867 (no title available)
- Feature selection in machine learning: an exact penalty approach using a difference of convex function algorithm
- Best subset, forward stepwise or Lasso? Analysis and recommendations based on extensive comparisons
- On the sensitivity of the Lasso to the number of predictor variables
- Penalized robust estimators in sparse logistic regression
- Nearly unbiased variable selection under minimax concave penalty
- Lasso meets horseshoe: a survey
- Variable selection approach for zero-inflated count data via adaptive Lasso
- Modeling gene-covariate interactions in sparse regression with group structure for genome-wide association studies
- Stochastic correlation coefficient ensembles for variable selection
- RandGA: injecting randomness into parallel genetic algorithm for variable selection
- Correlated variables in regression: clustering and sparse estimation
- Discussion: One-step sparse estimates in nonconcave penalized likelihood models
- Comparing six shrinkage estimators with large sample theory and asymptotically optimal prediction intervals
- Bootstrapping multiple linear regression after variable selection
- Variable Selection With Second-Generation P-Values
- Empirical likelihood test for high dimensional linear models
This page was built for publication: Relaxed Lasso (MaRDI item Q1020826)