On the difficulty of approximately maximizing agreements.
From MaRDI portal
Publication:1401958
DOI: 10.1016/S0022-0000(03)00038-2 · zbMath: 1053.68054 · OpenAlex: W1968934255 · MaRDI QID: Q1401958
Shai Ben-David, Nadav Eiron, Philip M. Long
Publication date: 19 August 2003
Published in: Journal of Computer and System Sciences
Full work available at URL: https://doi.org/10.1016/s0022-0000(03)00038-2
Keywords: neural networks; computational learning theory; machine learning; hardness; inapproximability; axis-aligned hyper-rectangles; balls; half-spaces; monomials
MSC classes: Computational learning theory (68Q32); Learning and adaptive systems in artificial intelligence (68T05)
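As context for the keywords above, the "maximizing agreements" problem the paper studies asks: given labeled examples, find a hypothesis from a fixed class that agrees with as many labels as possible. A minimal illustration (not from the paper) uses 1-D thresholds, the simplest half-spaces, where brute-force search over candidate cut points is easy even though the general problem is hard to approximate:

```python
# Illustration of the "maximizing agreements" objective (not code from the paper).
# Hypothesis class: 1-D thresholds h_t(x) = +1 if x >= t else -1.

def best_threshold(points):
    """Brute-force search for the threshold maximizing label agreements.

    points: list of (x, label) pairs with label in {+1, -1}.
    Returns (threshold, number_of_agreements).
    """
    xs = sorted(x for x, _ in points)
    # Candidate thresholds: below all points, midpoints between
    # consecutive points, and above all points.
    candidates = [xs[0] - 1] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1] + 1]
    return max(
        ((t, sum(1 for x, y in points if (1 if x >= t else -1) == y))
         for t in candidates),
        key=lambda pair: pair[1],
    )

data = [(0.0, -1), (1.0, -1), (2.0, +1), (3.0, -1), (4.0, +1)]
t, agree = best_threshold(data)
print(agree)  # 4 of the 5 labels can be matched; no threshold fits all
```

The paper's hardness results concern richer classes in high dimension (half-spaces, balls, axis-aligned hyper-rectangles, monomials), where no such efficient exhaustive search is available and even approximate maximization is shown to be intractable.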
Related Items (15)
- Applications of regularized least squares to pattern classification
- An experimental evaluation of simplicity in rule learning
- Algorithmic aspects of determining depth functions in a procedure for optimal hypothesis selection in data classification problems
- A Fisher consistent multiclass loss function with variable margin on positive examples
- A nonlinear kernel SVM Classifier via \(L_{0/1}\) soft-margin loss with classification performance
- Goal scoring, coherent loss and applications to machine learning
- Turning Grain Maps into Diagrams
- A new column generation algorithm for logical analysis of data
- A theory of learning from different domains
- Kernel methods in machine learning
- Maximizing agreements with one-sided error with applications to heuristic learning
- Margin-based first-order rule learning
- Quadratic Convergence of Smoothing Newton's Method for 0/1 Loss Optimization
- Hardness results for neural network approximation problems
- The computational complexity of densest region detection
Uses Software
Cites Work
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- Tracking drifting concepts by minimizing disagreements
- Toward efficient agnostic learning
- The hardness of approximate optima in lattices, codes, and systems of linear equations
- The complexity and approximability of finding maximum feasible subsystems of linear relations
- Robust trainability of single neurons
- Computing the maximum bichromatic discrepancy, with applications to computer graphics and machine learning
- Learning in the Presence of Malicious Errors
- Learnability and the Vapnik-Chervonenkis dimension
- Computational limitations on learning from examples
- Cryptographic limitations on learning Boolean formulae and finite automata
- Cryptographic hardness of distribution-specific learning