Kernels as features: on kernels, margins, and low-dimensional mappings
Publication: 851869
DOI: 10.1007/s10994-006-7550-1
zbMATH Open: 1470.68077
OpenAlex: W2103164654
MaRDI QID: Q851869
FDO: Q851869
Authors: Maria-Florina Balcan, Santosh S. Vempala, Avrim Blum
Publication date: 22 November 2006
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-006-7550-1
Mathematics Subject Classification
- Learning and adaptive systems in artificial intelligence (68T05)
- Computational aspects of data analysis and big data (68T09)
Cites Work
- Support-vector networks
- An introduction to support vector machines and other kernel-based learning methods
- Extensions of Lipschitz mappings into a Hilbert space
- Title not available
- Large margin classification using the perceptron algorithm
- Title not available
- Title not available
- Structural risk minimization over data-dependent hierarchies
- Advances in large-margin classifiers
- Database-friendly random projections: Johnson-Lindenstrauss with binary coins
- Title not available
- 10.1162/153244303321897681
- A PAC-Bayesian margin bound for linear classifiers
- An algorithmic theory of learning: Robust concepts and random projection
- 10.1162/153244303765208403
Cited In (19)
- Similarity, kernels, and the fundamental constraints on cognition
- Title not available
- Generalization error of random feature and kernel methods: hypercontractivity and kernel matrix concentration
- MREKLM: a fast multiple empirical kernel learning machine
- Feature elimination in kernel machines in moderately high dimensions
- Parameterized attribute reduction with Gaussian kernel based fuzzy rough sets
- Adaptive metric dimensionality reduction
- Random Projection and Recovery for High Dimensional Optimization with Arbitrary Outliers
- Data-independent random projections from the feature-map of the homogeneous polynomial kernel of degree two
- A theory of learning with similarity functions
- Algorithmic Learning Theory
- Theory and Algorithm for Learning with Dissimilarity Functions
- Toward a unified theory of sparse dimensionality reduction in Euclidean space
- Deep learning: a statistical viewpoint
- Structure from randomness in halfspace learning with the zero-one loss
- On the impossibility of dimension reduction for doubling subsets of \(\ell_{p}\)
- Beam search algorithms for multilabel learning
- Large scale analysis of generalization error in learning using margin based classification methods
- Sparse learning for large-scale and high-dimensional data: a randomized convex-concave optimization approach