Random projections as regularizers: learning a linear discriminant from fewer observations than dimensions
From MaRDI portal
Publication: Q2353006
DOI: 10.1007/S10994-014-5466-8
zbMATH Open: 1331.68183
OpenAlex: W2111662199
Wikidata: Q58550247 (Scholia: Q58550247)
MaRDI QID: Q2353006
FDO: Q2353006
Authors: Robert J. Durrant, Ata Kabán
Publication date: 7 July 2015
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-014-5466-8
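For context, the technique named in the title — regularizing Fisher's linear discriminant (FLD) by averaging classifiers learned in random low-dimensional projections, so that the within-class covariance becomes invertible even when observations are fewer than dimensions — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the parameter choices (`k`, `n_projections`), the equal-priors assumption, and the small ridge term added before inversion are all assumptions made here for the sketch.

```python
import numpy as np

def rp_fld_ensemble_predict(X_train, y_train, X_test, k=5, n_projections=100, seed=0):
    """Illustrative sketch: average FLD discriminant scores over random
    k-dimensional Gaussian projections of d-dimensional data (k << n < d)."""
    rng = np.random.default_rng(seed)
    d = X_train.shape[1]
    classes = np.unique(y_train)
    scores = np.zeros((X_test.shape[0], len(classes)))
    for _ in range(n_projections):
        # Gaussian random projection matrix, rows scaled to preserve norms on average
        R = rng.standard_normal((k, d)) / np.sqrt(k)
        Z, Zt = X_train @ R.T, X_test @ R.T
        means = np.array([Z[y_train == c].mean(axis=0) for c in classes])
        # Pooled within-class covariance in the projected space; with k small
        # relative to the sample size it is well conditioned, unlike in R^d.
        S = sum(np.cov(Z[y_train == c], rowvar=False) * (np.sum(y_train == c) - 1)
                for c in classes) / (len(y_train) - len(classes))
        S_inv = np.linalg.inv(S + 1e-8 * np.eye(k))  # tiny ridge: an assumption
        # Accumulate the FLD discriminant score per class (equal priors assumed)
        for j, m in enumerate(means):
            scores[:, j] += Zt @ S_inv @ m - 0.5 * m @ S_inv @ m
    return classes[np.argmax(scores, axis=1)]
```

Averaging over many projections plays the role of the regularizer: each projected FLD is estimable from few observations, and the ensemble average corresponds to a shrinkage-regularized discriminant in the original space.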
Cites Work
- LIBLINEAR: a library for large linear classification
- Regularized linear discriminant analysis and its application in microarrays
- The elements of statistical learning. Data mining, inference, and prediction
- Matrix Analysis
- Title not available
- Random forests
- Bagging predictors
- Comparison of Discrimination Methods for the Classification of Tumors Using Gene Expression Data
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- A well-conditioned estimator for large-dimensional covariance matrices
- Some theory for Fisher's linear discriminant function, `naive Bayes', and some alternatives when there are many more variables than observations
- Boosting the margin: a new explanation for the effectiveness of voting methods
- Pattern classification.
- An elementary proof of a theorem of Johnson and Lindenstrauss
- Title not available
- On consistency and sparsity for principal components analysis in high dimensions
- Empirical margin distributions and bounding the generalization error of combined classifiers
- Database-friendly random projections: Johnson-Lindenstrauss with binary coins.
- On variants of the Johnson–Lindenstrauss lemma
- Random Matrix Theory and Wireless Communications
- Expected classification error of the Fisher linear classifier with pseudo-inverse covariance matrix
- Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform
- On over-fitting in model selection and subsequent selection bias in performance evaluation
- A Random Matrix-Theoretic Approach to Handling Singular Covariance Estimates
- An algorithmic theory of learning: Robust concepts and random projection
- Singular vectors under random perturbation
- Random projections as regularizers: learning a linear discriminant from fewer observations than dimensions
Cited In (13)
- Random projections as regularizers: learning a linear discriminant from fewer observations than dimensions
- Random-projection ensemble classification. (With discussion).
- Sufficient ensemble size for random matrix theory-based handling of singular covariance matrices
- Title not available
- Indefinite proximity learning: a review
- Regularizing axis-aligned ensembles via data rotations that favor simpler learners
- Correlations between random projections and the bivariate normal
- Title not available
- Dimension-adaptive bounds on compressive FLD classification
- Regularization of projection directions via best basis selection approach
- Stability of random-projection based classifiers. The Bayes error perspective
- High-dimensional clustering via random projections
- Increasing the accuracy of solving discrete ill-posed problems by the random projection method