Reconciling privacy and utility: an unscented Kalman filter-based framework for differentially private machine learning
From MaRDI portal
Publication: 6097151
DOI: 10.1007/s10994-022-06279-5
OpenAlex: W4311834320
MaRDI QID: Q6097151
Authors:
Publication date: 12 June 2023
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-022-06279-5
Recommendations
- An optimal \((\epsilon, \delta)\)-differentially private learning of distributed deep fuzzy models
- Differentially private empirical risk minimization
- Learning with differential privacy: stability, learnability and the sufficiency and necessity of ERM principle
- Preserving differential privacy in deep neural networks with relevance-based adaptive noise imposition
- Mitigating disparate impact on model accuracy in differentially private learning
Keywords: machine learning; differential privacy; unscented Kalman filter; inference attacks; privacy-utility reconcilement
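For readers unfamiliar with the differential privacy mechanisms the keywords refer to, the following is a minimal sketch of the standard Gaussian mechanism applied to a gradient, as used in differentially private learning. This is an illustrative baseline only, not the unscented Kalman filter-based method of the publication itself; the function name and parameter values are hypothetical.

```python
import numpy as np

def gaussian_mechanism(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a gradient to L2 norm `clip_norm`, then add Gaussian noise
    whose scale is proportional to that clipping bound (the sensitivity).

    Illustrative sketch of the standard Gaussian mechanism; parameter
    defaults are hypothetical, not taken from the cited publication.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    # Scale the gradient down so its L2 norm is at most clip_norm.
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    # Noise standard deviation is noise_multiplier times the sensitivity.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise
```

The clipping step bounds each example's influence on the output (its sensitivity), which is what lets the added Gaussian noise yield an \((\epsilon, \delta)\)-differential privacy guarantee; the privacy-utility trade-off the publication addresses arises because larger noise strengthens privacy but degrades model accuracy.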
Cites Work
- Differential Privacy: A Survey of Results
- Differentially private empirical risk minimization
- Our Data, Ourselves: Privacy Via Distributed Noise Generation
- Large-scale machine learning with stochastic gradient descent
- The algorithmic foundations of differential privacy
- Differential privacy under continual observation
- Randomized approximate class-specific kernel spectral regression analysis for large-scale face verification
- On the Relation Between Identifiability, Differential Privacy, and Mutual-Information Privacy
- A deep learning approach using natural language processing and time-series forecasting towards enhanced food safety
Cited In (2)