Random projections as regularizers: learning a linear discriminant from fewer observations than dimensions
From MaRDI portal
Publication:2353006
DOI: 10.1007/s10994-014-5466-8 · zbMath: 1331.68183 · OpenAlex: W2111662199 · Wikidata: Q58550247 · Scholia: Q58550247 · MaRDI QID: Q2353006
Publication date: 7 July 2015
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-014-5466-8
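The technique named in the title — regularizing Fisher's linear discriminant by fitting it in low-dimensional random projections of the data and combining the resulting classifiers — can be sketched in a few lines. This is a minimal illustrative sketch only, assuming Gaussian projection matrices, a small ridge term for numerical stability, and majority voting; the function names and all parameter values are this sketch's own choices, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rp_lda_ensemble(X0, X1, k=5, n_members=50, rng=rng):
    """Fit an ensemble of Fisher discriminants, each in a random k-dim projection.

    X0, X1: class samples of shape (n_i, d), where possibly n_i < d, so the
    d-dimensional within-class covariance is singular.  Projecting to k << n_i
    dimensions makes each member's covariance estimate well conditioned.
    Returns a list of (R, w, b) triples; see predict() for voting.
    """
    d = X0.shape[1]
    members = []
    for _ in range(n_members):
        # Gaussian random projection, d x k (illustrative choice of scaling)
        R = rng.standard_normal((d, k)) / np.sqrt(k)
        Z0, Z1 = X0 @ R, X1 @ R
        m0, m1 = Z0.mean(axis=0), Z1.mean(axis=0)
        # pooled within-class scatter in the projected space (k x k)
        Sw = np.cov(Z0, rowvar=False) + np.cov(Z1, rowvar=False)
        # Fisher direction; tiny ridge term is an assumption for stability
        w = np.linalg.solve(Sw + 1e-8 * np.eye(k), m1 - m0)
        b = -0.5 * w @ (m0 + m1)          # threshold at the projected midpoint
        members.append((R, w, b))
    return members

def predict(members, X):
    """Majority vote of the ensemble members; returns 0/1 labels."""
    votes = np.mean([(X @ R @ w + b > 0).astype(float)
                     for R, w, b in members], axis=0)
    return (votes > 0.5).astype(int)
```

For example, with 20 observations per class in 100 dimensions (fewer observations than dimensions, so plain LDA is ill-posed), each member is fit on a 5-dimensional projection and the vote recovers a usable classifier.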
Related Items (9)
- High-dimensional clustering via random projections
- Increasing the accuracy of solving discrete ill-posed problems by the random projection method
- Indefinite Proximity Learning: A Review
- Sufficient ensemble size for random matrix theory-based handling of singular covariance matrices
- Regularizing axis-aligned ensembles via data rotations that favor simpler learners
- Correlations between random projections and the bivariate normal
- Random projections as regularizers: learning a linear discriminant from fewer observations than dimensions
Uses Software
Cites Work
- Bagging predictors
- Regularized linear discriminant analysis and its application in microarrays
- A well-conditioned estimator for large-dimensional covariance matrices
- Database-friendly random projections: Johnson-Lindenstrauss with binary coins.
- Some theory for Fisher's linear discriminant function, 'naive Bayes', and some alternatives when there are many more variables than observations
- Boosting the margin: a new explanation for the effectiveness of voting methods
- Empirical margin distributions and bounding the generalization error of combined classifiers
- Random projections as regularizers: learning a linear discriminant from fewer observations than dimensions
- An algorithmic theory of learning: Robust concepts and random projection
- Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform
- On variants of the Johnson–Lindenstrauss lemma
- Matrix Analysis
- Expected classification error of the Fisher linear classifier with pseudo-inverse covariance matrix
- Comparison of Discrimination Methods for the Classification of Tumors Using Gene Expression Data
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- An elementary proof of a theorem of Johnson and Lindenstrauss
- On Consistency and Sparsity for Principal Components Analysis in High Dimensions
- A Random Matrix-Theoretic Approach to Handling Singular Covariance Estimates
- Singular vectors under random perturbation
- Random Matrix Theory and Wireless Communications
- The elements of statistical learning. Data mining, inference, and prediction
- Random forests
This page was built for publication: Random projections as regularizers: learning a linear discriminant from fewer observations than dimensions