boost
From MaRDI portal
Software:51357
swMATH: 35655 · MaRDI QID: Q51357 · FDO: Q51357
Author name not available
Cited In (41)
- Bayesian variable selection in multinomial probit model for classifying high-dimensional data
- Feature selection when there are many influential features
- SGL-SVM: a novel method for tumor classification via support vector machine with sparse group lasso
- Sparse Bayesian kernel multinomial probit regression model for high-dimensional data classification
- A novel convex clustering method for high-dimensional data using semiproximal ADMM
- Stabilizing the Lasso against cross-validation variability
- Sparse sufficient dimension reduction using optimal scoring
- A simple approach to sparse clustering
- Title not available
- An empirical bias–variance analysis of DECORATE ensemble method at different training sample sizes
- High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
- A novel hybrid dimension reduction technique for undersized high dimensional gene expression data sets using information complexity criterion for cancer classification
- Sparse optimal scoring for multiclass cancer diagnosis and biomarker detection using microarray data
- Overfitting, generalization, and MSE in class probability estimation with high-dimensional data
- Rejoinder: Boosting algorithms: regularization, prediction and model fitting
- Biomarker discovery: classification using pooled samples
- On the distance concentration awareness of certain data reduction techniques
- Approximation Bounds for Sparse Programs
- Safe feature screening rules for the regularized Huber regression
- Asymptotics of Dantzig selector for a general single-index model
- Variable selection for sparse logistic regression
- General sparse multi-class linear discriminant analysis
- Sparse HDLSS discrimination with constrained data piling
- Selecting marker genes for cancer classification using supervised weighted kernel clustering and the support vector machine
- Bias-Corrected Diagonal Discriminant Rules for High-Dimensional Classification
- Gene boosting for cancer classification based on gene expression profiles
- Bayesian semiparametric model for pathway-based analysis with zero-inflated clinical outcomes
- Regularized \(k\)-means clustering of high-dimensional data and its asymptotic consistency
- Best subset selection via a modern optimization lens
- The Partial Linear Model in High Dimensions
- Multicategory vertex discriminant analysis for high-dimensional data
- Statistical significance in high-dimensional linear models
- Sparse Bayesian variable selection in kernel probit model for analyzing high-dimensional data
- Boosting algorithms: regularization, prediction and model fitting
- Distributionally Robust Inverse Covariance Estimation: The Wasserstein Shrinkage Estimator
- Sparse generalized canonical correlation analysis via linearized Bregman method
- Higher criticism for large-scale inference, especially for rare and weak effects
- A convex optimization approach to high-dimensional sparse quadratic discriminant analysis
- When is `nearest neighbour' meaningful: A converse theorem and implications
- A distribution-based Lasso for a general single-index model
- High-dimensional clustering via random projections
This page was built for software: boost