Memory-Sample Lower Bounds for Learning Parity with Noise
Publication: 6090918
DOI: 10.4230/LIPICS.APPROX/RANDOM.2021.60
arXiv: 2107.02320
OpenAlex: W3202425833
MaRDI QID: Q6090918
Authors: Sumegha Garg, Pravesh Kothari, Pengda Liu, Ran Raz
Publication date: 20 November 2023
Abstract: In this work, we show, for the well-studied problem of learning parity under noise, where a learner tries to learn $x = (x_1, \ldots, x_n) \in \{0,1\}^n$ from a stream of random linear equations over $\mathbb{F}_2$ that are correct with probability $\frac{1}{2}+\varepsilon$ and flipped with probability $\frac{1}{2}-\varepsilon$, that any learning algorithm requires either a memory of size $\Omega(n^2/\varepsilon)$ or an exponential number of samples. In fact, we study memory-sample lower bounds for a large class of learning problems, as characterized by [GRT'18], when the samples are noisy. A matrix $M : A \times X \to \{-1,1\}$ corresponds to the following learning problem with error parameter $\varepsilon$: an unknown element $x \in X$ is chosen uniformly at random. A learner tries to learn $x$ from a stream of samples $(a_1, b_1), (a_2, b_2), \ldots$, where for every $i$, $a_i \in A$ is chosen uniformly at random and $b_i = M(a_i, x)$ with probability $\frac{1}{2}+\varepsilon$ and $b_i = -M(a_i, x)$ with probability $\frac{1}{2}-\varepsilon$ ($0 < \varepsilon < \frac{1}{2}$). Assume that $k, \ell, r$ are such that any submatrix of $M$ of at least $2^{-k} \cdot |A|$ rows and at least $2^{-\ell} \cdot |X|$ columns has a bias of at most $2^{-r}$. We show that any learning algorithm for the learning problem corresponding to $M$, with error parameter $\varepsilon$, requires either a memory of size at least $\Omega(k \cdot \ell / \varepsilon)$, or at least $2^{\Omega(r)}$ samples. In particular, this shows that for a large class of learning problems, the same as those in [GRT'18], any learning algorithm requires either a memory of size at least $\Omega((\log |X|) \cdot (\log |A|)/\varepsilon)$ or an exponential number of noisy samples. Our proof is based on adapting the arguments in [Raz'17, GRT'18] to the noisy case.
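The sample model in the abstract can be made concrete with a minimal sketch. The Python snippet below is illustrative only (the function names, parameters, and the brute-force decoder are assumptions, not from the paper): it generates the noisy equation stream for the parity case, where $M(a, x) = (-1)^{\langle a, x \rangle}$, and recovers the secret by exhaustive maximum-likelihood search, which is feasible only for very small $n$.

```python
import random


def noisy_parity_stream(n, eps, num_samples, seed=0):
    """Generate the noisy-parity sample stream from the abstract: a secret
    x is uniform in {0,1}^n, each a_i is uniform in {0,1}^n, and
    b_i = <a_i, x> mod 2 is reported correctly with probability 1/2 + eps
    and flipped with probability 1/2 - eps."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    samples = []
    for _ in range(num_samples):
        a = [rng.randint(0, 1) for _ in range(n)]
        parity = sum(ai * xi for ai, xi in zip(a, x)) % 2
        b = parity if rng.random() < 0.5 + eps else 1 - parity
        samples.append((a, b))
    return x, samples


def ml_decode(n, samples):
    """Exhaustive maximum-likelihood decoding: return the candidate in
    {0,1}^n that agrees with the largest number of noisy equations.
    Time is Theta(2^n * len(samples)); this only illustrates the learning
    problem and is not an efficient learner."""
    def agreements(bits):
        return sum(
            (sum(ai * ci for ai, ci in zip(a, bits)) % 2) == b
            for a, b in samples
        )
    candidates = ([(c >> j) & 1 for j in range(n)] for c in range(2 ** n))
    return max(candidates, key=agreements)


if __name__ == "__main__":
    x, samples = noisy_parity_stream(n=10, eps=0.25, num_samples=400)
    print("secret recovered:", ml_decode(10, samples) == x)
```

The exhaustive decoder takes time exponential in $n$; the paper's lower bound shows that, in the streaming setting, any algorithm using fewer than $\Omega(n^2/\varepsilon)$ bits of memory must consume an exponential number of such samples.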
Full work available at URL: https://arxiv.org/abs/2107.02320
Cited in: 2 documents