Gradient complexity and non-stationary views of differentially private empirical risk minimization
Publication: 6199392
DOI: 10.1016/j.tcs.2023.114259
OpenAlex: W4387745690
MaRDI QID: Q6199392
Publication date: 23 February 2024
Published in: Theoretical Computer Science
Full work available at URL: https://doi.org/10.1016/j.tcs.2023.114259
Keywords: convex optimization, supervised learning, non-convex optimization, differential privacy, empirical risk minimization
Cites Work
- Smooth minimization of non-smooth functions
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- The landscape of empirical risk for nonconvex losses
- Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds
- The Algorithmic Foundations of Differential Privacy
- Katyusha: the first direct acceleration of stochastic gradient methods
- Private stochastic convex optimization: optimal rates in linear time
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Theory of Cryptography
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization