Differentially private SGD with random features
Publication: 6542573
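The title names a combination of two standard ingredients: differentially private SGD (per-example gradient clipping plus Gaussian noise) run on a random-feature approximation of a kernel. Below is a minimal generic sketch of that combination, not the paper's actual algorithm; all function and parameter names (`rff_map`, `dp_sgd_rff`, `clip_norm`, `noise_sigma`) are illustrative, the loss is squared error, and the random Fourier features approximate an RBF kernel.

```python
import numpy as np

def rff_map(X, W, b):
    """Random Fourier features phi(x) = sqrt(2/D) * cos(W^T x + b),
    approximating a Gaussian (RBF) kernel (Rahimi-Recht construction)."""
    D = W.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def dp_sgd_rff(X, y, D=100, kernel_scale=1.0, T=1, eta=0.1,
               clip_norm=1.0, noise_sigma=1.0, seed=0):
    """Generic DP-SGD on squared loss in random-feature space:
    clip each per-example gradient to `clip_norm`, then add Gaussian
    noise with std `noise_sigma * clip_norm` (the Gaussian mechanism)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Sample the random features once; they are data-independent.
    W = rng.normal(0.0, 1.0 / kernel_scale, size=(d, D))
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    theta = np.zeros(D)
    for _ in range(T):
        for i in rng.permutation(n):
            phi = rff_map(X[i:i + 1], W, b)[0]
            grad = (phi @ theta - y[i]) * phi            # per-example gradient
            grad /= max(1.0, np.linalg.norm(grad) / clip_norm)  # clip
            grad += rng.normal(0.0, noise_sigma * clip_norm, size=D)  # privatize
            theta -= eta * grad
    return theta, W, b
```

The privacy guarantee of such a scheme would depend on `noise_sigma`, the number of passes `T`, and the accounting method; this sketch only shows the mechanics.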
Recommendations
- Differentially private empirical risk minimization
- Learning with differential privacy: stability, learnability and the sufficiency and necessity of ERM principle
- Survey on privacy preserving techniques for machine learning
- Differentially private distributed online learning over time-varying digraphs via dual averaging
- Reconciling privacy and utility: an unscented Kalman filter-based framework for differentially private machine learning
Cited works
- Capacity dependent analysis for functional online learning algorithms
- Convergence of unregularized online learning algorithms
- Differentially private SGD with non-smooth losses
- Differentially private empirical risk minimization
- Fast and strong convergence of online learning algorithms
- High-dimensional statistics. A non-asymptotic viewpoint
- Learning Theory
- Learning theory estimates via integral operators and their approximations
- Nonparametric stochastic approximation with large step-sizes
- Online Regularized Classification Algorithms
- Online gradient descent algorithms for functional data learning
- Online gradient descent learning algorithms
- Optimal rates for multi-pass stochastic gradient methods
- Optimum bounds for the distributions of martingales in Banach spaces
- Private stochastic convex optimization: optimal rates in linear time
- The algorithmic foundations of differential privacy
- Theory of Cryptography
- Theory of Reproducing Kernels
This page was built for publication: Differentially private SGD with random features (MaRDI item Q6542573)