Learning intersections and thresholds of halfspaces
DOI: 10.1016/j.jcss.2003.11.002 · zbMath: 1074.68026 · MaRDI QID: Q598257
Adam R. Klivans, Ryan O'Donnell, Rocco A. Servedio
Publication date: 6 August 2004
Published in: Journal of Computer and System Sciences
Full work available at URL: https://doi.org/10.1016/j.jcss.2003.11.002
Keywords: Fourier analysis; computational learning theory; halfspaces; noise sensitivity; polynomial threshold functions
68Q32: Computational learning theory
Related Items
- Unconditional lower bounds for learning intersections of halfspaces
- On PAC learning algorithms for rich Boolean function classes
- Learning intersections of halfspaces with a margin
- Cryptographic hardness for learning intersections of halfspaces
Cites Work
- Learning regular sets from queries and counterexamples
- PAC learning intersections of halfspaces with membership queries
- Learning with unreliable boundary queries
- Harmonic analysis and Boolean function complexity
- On using the Fourier transform to learn disjoint DNF
- The expressive power of voting polynomials
- Learning an intersection of a constant number of halfspaces over a uniform distribution
- An efficient membership-query algorithm for learning DNF with respect to the uniform distribution
- A simple algorithm for learning \(O(\log n)\)-term DNF
- More efficient PAC-learning of DNF with membership queries under the uniform distribution
- PP is closed under intersection
- An \(O(n^{\log \log n})\) learning algorithm for DNF under the uniform distribution
- Learning monotone log-term DNF formulas under the uniform distribution
- Queries and concept learning
- Threshold circuits of bounded depth
- Rational approximation to \(|x|\)
- Weakly learning DNF and characterizing statistical query learning using Fourier analysis
- Constant depth circuits, Fourier transform, and learnability
- The Perceptron: A Model for Brain Functioning. I
- Learnability and the Vapnik-Chervonenkis dimension
- A theory of the learnable
- On the Fourier spectrum of monotone functions
- Learning DNF in time \(2^{\tilde O(n^{1/3})}\)
- Cryptographic hardness of distribution-specific learning
- Hardness amplification within NP
- Noise sensitivity of Boolean functions and applications to percolation