Reconciling privacy and utility: an unscented Kalman filter-based framework for differentially private machine learning
From MaRDI portal
Publication:6097151
Recommendations
- An optimal \((\epsilon, \delta)\)-differentially private learning of distributed deep fuzzy models
- Differentially private empirical risk minimization
- Learning with differential privacy: stability, learnability and the sufficiency and necessity of ERM principle
- Preserving differential privacy in deep neural networks with relevance-based adaptive noise imposition
- Mitigating disparate impact on model accuracy in differentially private learning
Cites work
- A deep learning approach using natural language processing and time-series forecasting towards enhanced food safety
- Differential Privacy: A Survey of Results
- Differential privacy under continual observation
- Differentially private empirical risk minimization
- Large-scale machine learning with stochastic gradient descent
- On the Relation Between Identifiability, Differential Privacy, and Mutual-Information Privacy
- Our Data, Ourselves: Privacy Via Distributed Noise Generation
- Randomized approximate class-specific kernel spectral regression analysis for large-scale face verification
- The algorithmic foundations of differential privacy
Cited in (2)