Private Learning and Sanitization: Pure vs. Approximate Differential Privacy

Publication:2851871

DOI: 10.1007/978-3-642-40328-6_26
zbMATH Open: 1332.68061
arXiv: 1407.2674
OpenAlex: W2805580603
MaRDI QID: Q2851871
FDO: Q2851871


Authors: Amos Beimel, Kobbi Nissim, Uri Stemmer


Publication date: 4 October 2013

Published in: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques

Abstract: We compare the sample complexity of private learning [Kasiviswanathan et al. 2008] and sanitization [Blum et al. 2008] under pure epsilon-differential privacy [Dwork et al. TCC 2006] and approximate (epsilon,delta)-differential privacy [Dwork et al. Eurocrypt 2006]. We show that the sample complexity of these tasks under approximate differential privacy can be significantly lower than that under pure differential privacy. We define a family of optimization problems, which we call Quasi-Concave Promise Problems, that generalizes some of our considered tasks. We observe that a quasi-concave promise problem can be privately approximated using a solution to a smaller instance of a quasi-concave promise problem. This allows us to construct an efficient recursive algorithm solving such problems privately. Specifically, we construct private learners for point functions, threshold functions, and axis-aligned rectangles in high dimension. Similarly, we construct sanitizers for point functions and threshold functions. We also examine the sample complexity of label-private learners, a relaxation of private learning where the learner is required to protect only the privacy of the labels in the sample. We show that the VC dimension completely characterizes the sample complexity of such learners; that is, the sample complexity of learning with label privacy is equal (up to constants) to that of learning without privacy.
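For context, the classical pure epsilon-differentially-private baseline for learning a threshold function is the exponential mechanism of McSherry and Talwar: score each candidate threshold by its (negated) empirical error and sample one with probability proportional to exp(eps * score / 2). The sketch below illustrates that baseline, not the paper's recursive algorithm for quasi-concave promise problems; the function name and the candidate grid are illustrative assumptions.

```python
import math
import random

def private_threshold(data, labels, candidates, eps, rng=random):
    """Exponential-mechanism sketch (not the paper's algorithm) for privately
    picking a threshold t for the hypothesis h_t(x) = 1 iff x <= t."""
    # Score = negative empirical error; changing one labeled sample moves
    # any score by at most 1, so the score sensitivity is 1.
    scores = []
    for t in candidates:
        err = sum(1 for x, y in zip(data, labels) if (x <= t) != (y == 1))
        scores.append(-err)
    # Sample t with probability proportional to exp(eps * score / 2),
    # shifting by the max score for numerical stability.
    m = max(scores)
    weights = [math.exp(eps * (s - m) / 2) for s in scores]
    r = rng.random() * sum(weights)
    acc = 0.0
    for t, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return t
    return candidates[-1]
```

Under pure differential privacy the candidate set must cover the whole (discretized) domain, which is the source of the sample-complexity gap the paper studies: approximate (eps, delta)-differential privacy admits threshold learners with substantially lower sample complexity.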


Full work available at URL: https://arxiv.org/abs/1407.2674









Cited In (99)





