Pages that link to "Item:Q939654"
The following pages link to The sparsity and bias of the LASSO selection in high-dimensional linear regression (Q939654):
Displaying 50 items.
- \(\ell_0\)-regularized high-dimensional accelerated failure time model (Q2129574) (← links)
- Inference for low-rank tensors -- no need to debias (Q2131273) (← links)
- Adaptive log-density estimation (Q2131904) (← links)
- De-biasing the Lasso with degrees-of-freedom adjustment (Q2136990) (← links)
- Bayesian factor-adjusted sparse regression (Q2155305) (← links)
- Robust post-selection inference of high-dimensional mean regression with heavy-tailed asymmetric or heteroskedastic errors (Q2172011) (← links)
- Double-slicing assisted sufficient dimension reduction for high-dimensional censored data (Q2215728) (← links)
- Which bridge estimator is the best for variable selection? (Q2215760) (← links)
- A general framework for Bayes structured linear models (Q2215762) (← links)
- Pivotal estimation via square-root lasso in nonparametric regression (Q2249850) (← links)
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison (Q2259726) (← links)
- Consistency of Bayesian linear model selection with a growing number of parameters (Q2276179) (← links)
- Sorted concave penalized regression (Q2284364) (← links)
- Adaptive group bridge selection in the semiparametric accelerated failure time model (Q2293393) (← links)
- High-dimensional regression in practice: an empirical study of finite-sample prediction, variable selection and ranking (Q2302521) (← links)
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation (Q2325349) (← links)
- A knockoff filter for high-dimensional selective inference (Q2328050) (← links)
- Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming (Q2330643) (← links)
- Minimax-optimal nonparametric regression in high dimensions (Q2343958) (← links)
- A new test for part of high dimensional regression coefficients (Q2348453) (← links)
- An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization (Q2352420) (← links)
- Goodness-of-fit tests for high-dimensional Gaussian linear models (Q2380086) (← links)
- Simultaneous analysis of Lasso and Dantzig selector (Q2388978) (← links)
- D-trace estimation of a precision matrix using adaptive lasso penalties (Q2418368) (← links)
- A nonconvex model with minimax concave penalty for image restoration (Q2420696) (← links)
- Some improved estimation strategies in high-dimensional semiparametric regression models with application to riboflavin production data (Q2423185) (← links)
- A simple homotopy proximal mapping algorithm for compressive sensing (Q2425244) (← links)
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators (Q2426826) (← links)
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models (Q2429925) (← links)
- Multi-stage convex relaxation for feature selection (Q2435243) (← links)
- Calibrating nonconvex penalized regression in ultra-high dimension (Q2438760) (← links)
- Estimation in the presence of many nuisance parameters: composite likelihood and plug-in likelihood (Q2447659) (← links)
- Adaptive Lasso estimators for ultrahigh dimensional generalized linear models (Q2453901) (← links)
- Endogeneity in high dimensions (Q2510821) (← links)
- Convergence and sparsity of Lasso and group Lasso in high-dimensional generalized linear models (Q2516626) (← links)
- Nearly optimal Bayesian shrinkage for high-dimensional regression (Q2683046) (← links)
- Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression (Q2687439) (← links)
- High-dimensional sparse portfolio selection with nonnegative constraint (Q2700403) (← links)
- Multiple structural breaks in cointegrating regressions: a model selection approach (Q2700541) (← links)
- Robust group non-convex estimations for high-dimensional partially linear models (Q2811266) (← links)
- Asymptotic Equivalence of Regularization Methods in Thresholded Parameter Space (Q2861817) (← links)
- Variable selection for semiparametric regression models with iterated penalisation (Q2892927) (← links)
- Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization (Q2911662) (← links)
- Identification of Partially Linear Structure in Additive Models with an Application to Gene Expression Prediction from Sequences (Q2912335) (← links)
- Non-asymptotic oracle inequalities for the Lasso and Group Lasso in high dimensional logistic model (Q2954238) (← links)
- On Cross-Validation for Sparse Reduced Rank Regression (Q3120104) (← links)
- A NEW APPROACH TO SELECT THE BEST SUBSET OF PREDICTORS IN LINEAR REGRESSION MODELLING: BI-OBJECTIVE MIXED INTEGER LINEAR PROGRAMMING (Q3122035) (← links)
- Variable selection in high-dimensional partly linear additive models (Q3145401) (← links)
- Greedy forward regression for variable screening (Q4639813) (← links)
- A Mixed-Integer Fractional Optimization Approach to Best Subset Selection (Q4995087) (← links)