Sparse extractor families for all the entropy
Abstract: We consider the problem of extracting entropy by sparse transformations, namely functions with a small number of overall input-output dependencies. In contrast to previous works, we seek extractors for essentially all the entropy without any assumption on the underlying distribution beyond a min-entropy requirement. We give two simple constructions of sparse extractor families: collections of sparse functions such that, for any distribution X on inputs of sufficiently high min-entropy, the output of most functions from the collection on a random input chosen from X is statistically close to uniform. For strong extractor families (i.e., functions in the family do not take additional randomness) we give upper and lower bounds on the sparsity that are tight up to a constant factor for a wide range of min-entropies. We then prove that, for some min-entropies, weak extractor families can achieve better sparsity. We show how this construction can be used toward a more efficient parallel transformation of (non-uniform) one-way functions into pseudorandom generators. More generally, sparse extractor families can be used instead of pairwise independence in various randomized or non-uniform settings where preserving locality (i.e., parallelism) is of interest.
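A minimal formalization of the notion sketched in the abstract, stated in standard extractor notation; the symbols n, m, k, epsilon, s and the expectation form of the guarantee below are illustrative conventions, not quoted from the paper. One natural way to state "the output of most functions from the collection is statistically close to uniform" is the following: a family \(\mathcal{F} = \{ f_i : \{0,1\}^n \to \{0,1\}^m \}_{i \in I}\) is a \((k,\varepsilon)\) extractor family if, for every distribution \(X\) on \(\{0,1\}^n\) with min-entropy \(H_\infty(X) = -\log_2 \max_x \Pr[X = x] \ge k\),
\[
  \mathbb{E}_{i \sim I}\bigl[\, \Delta\bigl(f_i(X),\, U_m\bigr) \,\bigr] \le \varepsilon ,
\]
where \(\Delta\) denotes statistical (total variation) distance and \(U_m\) is the uniform distribution on \(\{0,1\}^m\). Sparsity is measured by the total number of input-output dependencies: the family is \(s\)-sparse if, for every \(i\), the sum over the \(m\) output bits of \(f_i\) of the number of input bits each output bit depends on is at most \(s\).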
Cites work
- scientific article, zbMATH DE number 3154781 (title not available)
- scientific article, zbMATH DE number 67625 (title not available)
- scientific article, zbMATH DE number 67631 (title not available)
- scientific article, zbMATH DE number 1559537 (title not available)
- A model of interactive teaching
- A theory of goal-oriented communication
- A theory of the learnable
- Algorithmic Learning Theory
- Derandomizing polynomial identity tests means proving circuit lower bounds
- In search of an easy witness: Exponential time vs. probabilistic polynomial time
- Learning from different teachers
- Measuring teachability using variants of the teaching dimension
- Models of cooperative teaching and learning
- Occam's razor
- On specifying Boolean functions by labelled examples
- On the complexity of teaching
- On the limits of efficient teachability
- On the power of inductive inference from good examples
- Pseudorandom generators for space-bounded computation
- Recent Developments in Algorithmic Teaching
- Teachability in computational learning
- Teaching Randomized Learners
- Teaching a smarter learner