From average case complexity to improper learning complexity
Publication:5259579
Abstract: The basic problem in the PAC model of computational learning theory is to determine which hypothesis classes are efficiently learnable. There is presently a dearth of results showing hardness of learning problems. Moreover, the existing lower bounds fall short of the best known algorithms. The biggest challenge in proving complexity results is to establish hardness of improper learning (a.k.a. representation-independent learning). The difficulty in proving lower bounds for improper learning is that the standard reductions from NP-hard problems do not seem to apply in this context. There is essentially only one known approach to proving lower bounds on improper learning. It was initiated in (Kearns and Valiant 89) and relies on cryptographic assumptions. We introduce a new technique for proving hardness of improper learning, based on reductions from problems that are hard on average. We put forward a (fairly strong) generalization of Feige's assumption (Feige 02) about the complexity of refuting random constraint satisfaction problems. Combining this assumption with our new technique yields far-reaching implications. In particular: 1. Learning DNF's is hard. 2. Agnostically learning halfspaces with a constant approximation ratio is hard. 3. Learning an intersection of \(\omega(1)\) halfspaces is hard.
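To make the average-case assumption concrete, here is a minimal LaTeX sketch of the random 3-SAT refutation problem behind Feige's assumption; this is the standard formulation, paraphrased rather than quoted from the paper, and the symbols \(\mathcal{A}\), \(\phi\), \(n\), \(\Delta\) are illustrative choices.

% Sketch (assumes amsmath): a refutation algorithm \mathcal{A} receives a 3-CNF
% formula \phi over n variables with \Delta n clauses, drawn uniformly at random,
% and outputs either "unsat" or "typical" subject to:
\begin{align*}
  &\text{(soundness)}    && \mathcal{A}(\phi) = \textsf{unsat} \;\Rightarrow\; \phi \text{ is unsatisfiable},\\
  &\text{(completeness)} && \Pr_{\phi}\bigl[\mathcal{A}(\phi) = \textsf{unsat}\bigr] \ge 1 - o(1).
\end{align*}
% Feige's assumption: for every (arbitrarily large) constant \Delta, no
% polynomial-time algorithm achieves both.

The paper posits a stronger variant covering a broader family of random constraint satisfaction problems, and its hardness-of-learning results follow by reducing such refutation problems to improper PAC learning tasks.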
Recommendations
- The complexity of properly learning simple concept classes
- Complexity theoretic limitations on learning halfspaces
- Complexity theoretic hardness results for query learning
- Hardness results for agnostically learning low-degree polynomial threshold functions
- On agnostic learning of parities, monomials, and halfspaces
Cited in (12)
- Cryptographic hardness for learning intersections of halfspaces
- Distribution-specific hardness of learning neural networks
- Hinge-minimax learner for the ensemble of hyperplanes
- scientific article; zbMATH DE number 7758323
- Learning infinite-word automata with loop-index queries
- Limits of preprocessing
- Constructing concrete hard instances of the maximum independent set problem
- Fast pseudorandom functions based on expander graphs
- Improper learning by refuting
- The complexity of properly learning simple concept classes
- Complexity theoretic limitations on learning halfspaces
- scientific article; zbMATH DE number 7561745