Compactness Hypothesis, Potential Functions, and Rectifying Linear Space in Machine Learning
From MaRDI portal
Publication:6104512
Cites work
- scientific article; zbMATH DE number 4004880 (no title available)
- scientific article; zbMATH DE number 4049124 (no title available)
- scientific article; zbMATH DE number 3758499 (no title available)
- scientific article; zbMATH DE number 3551792 (no title available)
- scientific article; zbMATH DE number 1332320 (no title available)
- scientific article; zbMATH DE number 194970 (no title available)
- scientific article; zbMATH DE number 3806769 (no title available)
- scientific article; zbMATH DE number 1832360 (no title available)
- scientific article; zbMATH DE number 845714 (no title available)
- scientific article; zbMATH DE number 1391397 (no title available)
- scientific article; zbMATH DE number 3227378 (no title available)
- A Statistical View of Some Chemometrics Regression Tools
- A theory of learning with similarity functions
- A unified approach to pattern recognition
- Dissimilarity representations allow for building good classifiers
- Encyclopedia of Distances
- Modeling binary correlated responses using SAS, SPSS and R
- Multiple kernel learning algorithms
- Regularization and Variable Selection Via the Elastic Net
- Support-vector networks
- The Dissimilarity Representation for Pattern Recognition
- The doubly regularized support vector machine
- Theoretical foundations of the potential function method in pattern recognition learning
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Cited in 3 documents