The following pages link to boost (Q51357):
Displayed 41 items.
- Higher criticism for large-scale inference, especially for rare and weak effects (Q254401) (← links)
- Best subset selection via a modern optimization lens (Q282479) (← links)
- A novel hybrid dimension reduction technique for undersized high dimensional gene expression data sets using information complexity criterion for cancer classification (Q308795) (← links)
- Asymptotics of Dantzig selector for a general single-index model (Q328839) (← links)
- Statistical significance in high-dimensional linear models (Q373525) (← links)
- Feature selection when there are many influential features (Q396025) (← links)
- Boosting algorithms: regularization, prediction and model fitting (Q449780) (← links)
- Rejoinder: Boosting algorithms: regularization, prediction and model fitting (Q449785) (← links)
- Multicategory vertex discriminant analysis for high-dimensional data (Q542928) (← links)
- On the distance concentration awareness of certain data reduction techniques (Q614077) (← links)
- A convex optimization approach to high-dimensional sparse quadratic discriminant analysis (Q820816) (← links)
- Selecting marker genes for cancer classification using supervised weighted kernel clustering and the support vector machine (Q961343) (← links)
- Sparse optimal scoring for multiclass cancer diagnosis and biomarker detection using microarray data (Q1004954) (← links)
- Stabilizing the Lasso against cross-validation variability (Q1615230) (← links)
- A simple approach to sparse clustering (Q1658539) (← links)
- General sparse multi-class linear discriminant analysis (Q1659183) (← links)
- Sparse HDLSS discrimination with constrained data piling (Q1663205) (← links)
- Regularized \(k\)-means clustering of high-dimensional data and its asymptotic consistency (Q1950809) (← links)
- A novel convex clustering method for high-dimensional data using semiproximal ADMM (Q2004149) (← links)
- A distribution-based Lasso for a general single-index model (Q2018911) (← links)
- High-dimensional clustering via random projections (Q2129311) (← links)
- Sparse Bayesian variable selection in kernel probit model for analyzing high-dimensional data (Q2184408) (← links)
- Sparse generalized canonical correlation analysis via linearized Bregman method (Q2191828) (← links)
- Variable selection for sparse logistic regression (Q2202033) (← links)
- Biomarker discovery: classification using pooled samples (Q2255767) (← links)
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison (Q2259726) (← links)
- Gene boosting for cancer classification based on gene expression profiles (Q2270794) (← links)
- When is `nearest neighbour' meaningful: A converse theorem and implications (Q2272155) (← links)
- SGL-SVM: a novel method for tumor classification via support vector machine with sparse group lasso (Q2288505) (← links)
- Bayesian variable selection in multinomial probit model for classifying high-dimensional data (Q2354735) (← links)
- Sparse sufficient dimension reduction using optimal scoring (Q2359474) (← links)
- Bayesian semiparametric model for pathway-based analysis with zero-inflated clinical outcomes (Q2363720) (← links)
- Safe feature screening rules for the regularized Huber regression (Q2656712) (← links)
- Bias-Corrected Diagonal Discriminant Rules for High-Dimensional Classification (Q3076039) (← links)
- (Q4558173) (← links)
- Distributionally Robust Inverse Covariance Estimation: The Wasserstein Shrinkage Estimator (Q5031024) (← links)
- Approximation Bounds for Sparse Programs (Q5073726) (← links)
- An empirical bias–variance analysis of DECORATE ensemble method at different training sample sizes (Q5126995) (← links)
- The Partial Linear Model in High Dimensions (Q5251496) (← links)
- Overfitting, generalization, and MSE in class probability estimation with high‐dimensional data (Q5416417) (← links)
- Sparse Bayesian kernel multinomial probit regression model for high-dimensional data classification (Q5860777) (← links)