An algorithmic theory of learning: Robust concepts and random projection
From MaRDI portal
Publication: 2499543
DOI: 10.1007/s10994-006-6265-7
zbMath: 1095.68092
OpenAlex: W1516726196
MaRDI QID: Q2499543
Santosh Vempala, Rosa I. Arriaga
Publication date: 14 August 2006
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-006-6265-7
Related Items
- Bioinspired Random Projections for Robust, Sparse Classification
- An algorithmic theory of learning: robust concepts and random projection
- Kernels as features: on kernels, margins, and low-dimensional mappings
- Randomized Complete Pivoting for Solving Symmetric Indefinite Linear Systems
- Sparser Johnson-Lindenstrauss Transforms
- Efficient clustering on Riemannian manifolds: a kernelised random projection approach
- On the hardness of learning intersections of two halfspaces
- A simple test for zero multiple correlation coefficient in high-dimensional normal data using random projection
- Randomized algorithms in numerical linear algebra
- Distance geometry and data science
- Randomized anisotropic transform for nonlinear dimensionality reduction
- Approximate polytope ensemble for one-class classification
- \( \varepsilon \)-isometric dimension reduction for incompressible subsets of \(\ell_p\)
- Random Projection and Recovery for High Dimensional Optimization with Arbitrary Outliers
- Sign rank versus Vapnik-Chervonenkis dimension
- Visual Categorization with Random Projection
- Structure from Randomness in Halfspace Learning with the Zero-One Loss
- Learning intersections of halfspaces with a margin
- The hardest halfspace
- Complexity measures of sign matrices
- Real-valued embeddings and sketches for fast distance and similarity estimation
- Randomized large distortion dimension reduction
- Optimal bounds for sign-representing the intersection of two halfspaces by polynomials
- An algorithmic theory of learning: Robust concepts and random projection
- Random Projection RBF Nets for Multidimensional Density Estimation
- Fast Parallel Estimation of High Dimensional Information Theoretical Quantities with Low Dimensional Random Projection Ensembles
- MREKLM: a fast multiple empirical kernel learning machine
- Optimal Bounds for Johnson-Lindenstrauss Transformations
- Almost Optimal Explicit Johnson-Lindenstrauss Families
- Sparse Learning for Large-Scale and High-Dimensional Data: A Randomized Convex-Concave Optimization Approach
- Fast dimension reduction using Rademacher series on dual BCH codes
- On the Impossibility of Dimension Reduction for Doubling Subsets of \(\ell_{p}\)
- On the perceptron's compression
- Random projections and Hotelling's T2 statistics for change detection in high-dimensional data streams
- Random projections as regularizers: learning a linear discriminant from fewer observations than dimensions
Cites Work
- On learning a union of half spaces
- The Johnson-Lindenstrauss lemma and the sphericity of some graphs
- Geometric algorithms and combinatorial optimization
- A polynomial-time algorithm for learning noisy linear threshold functions
- Efficient distribution-free learning of probabilistic concepts
- The computational complexity of propositional STRIPS planning
- Boosting the margin: a new explanation for the effectiveness of voting methods
- The geometry of graphs and some of its algorithmic applications
- Support-vector networks
- Large margin classification using the perceptron algorithm
- An algorithmic theory of learning: Robust concepts and random projection
- A neuroidal architecture for cognitive computation
- Extensions of Lipschitz mappings into a Hilbert space
- Learnability and the Vapnik-Chervonenkis dimension
- A theory of the learnable
- Learning Theory
- DOI: 10.1162/153244303321897681
- Probability Inequalities for Sums of Bounded Random Variables
- Algorithmic Learning Theory
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- The Relaxation Method for Linear Inequalities