On the privacy of noisy stochastic gradient descent for convex optimization
DOI: 10.1137/23M1556538
MaRDI QID: Q6583673
FDO: Q6583673
Authors: Jason M. Altschuler, Jinho Bok, Kunal Talwar
Publication date: 6 August 2024
Published in: SIAM Journal on Computing
Keywords: convex optimization; stochastic gradient Langevin dynamics; differential privacy; noisy stochastic gradient descent; privacy amplification by iteration; shifted divergences
Mathematics Subject Classification: Convex programming (90C25); Probability in computer science (algorithm analysis, random structures, phase transitions, etc.) (68Q87); Privacy of data (68P27)
Cites Work
- Probability with Martingales
- Differentially private empirical risk minimization
- Theory of Cryptography
- Rényi Divergence and Kullback-Leibler Divergence
- Convex optimization: algorithms and complexity
- What can we learn privately?
- High-dimensional Bayesian inference via the unadjusted Langevin algorithm
- Theoretical Guarantees for Approximate Sampling from Smooth and Log-Concave Densities
- Sampling from a log-concave distribution with projected Langevin Monte Carlo
- Probability and computing. Randomization and probabilistic techniques in algorithms and data analysis
- Private stochastic convex optimization: optimal rates in linear time
- Private selection from private candidates