Differentially private SGD with random features
Publication: 6542573
DOI: 10.1007/s11766-024-5037-0
MaRDI QID: Q6542573
Authors: Yiguang Wang, Zheng-Chu Guo
Publication date: 22 May 2024
Published in: Applied Mathematics. Series B (English Edition)
Keywords: learning theory; reproducing kernel Hilbert spaces; differential privacy; stochastic gradient descent; random features
MSC classifications: Learning and adaptive systems in artificial intelligence (68T05); Applications of mathematical programming (90C90); Privacy of data (68P27)
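The keyword combination above (random features, differential privacy, stochastic gradient descent) can be illustrated with a minimal sketch. This is not the algorithm or analysis of the cited paper, only the standard ingredients it builds on: random Fourier features approximating a Gaussian kernel, and DP-SGD with per-example gradient clipping plus Gaussian noise. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def random_fourier_features(X, D, gamma, rng):
    """Random Fourier feature map approximating the Gaussian kernel
    exp(-gamma * ||x - y||^2) with D features (Rahimi-Recht construction)."""
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def dp_sgd_least_squares(Z, y, epochs=5, lr=0.1, clip=1.0,
                         noise_mult=1.0, rng=None):
    """One-pass-per-epoch SGD on the squared loss with per-example
    gradient clipping and Gaussian noise -- the generic DP-SGD recipe
    (privacy accounting omitted in this sketch)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, D = Z.shape
    w = np.zeros(D)
    for _ in range(epochs):
        for i in rng.permutation(n):
            g = (Z[i] @ w - y[i]) * Z[i]                       # per-example gradient
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip to norm <= clip
            g += rng.normal(0.0, noise_mult * clip, size=D)    # Gaussian mechanism
            w -= lr * g
    return w
```

In this setup the kernel ridge/least-squares problem in the RKHS is replaced by a finite-dimensional linear model on the feature map, so the noisy, clipped gradients stay cheap to compute; the noise multiplier would be calibrated to a target (epsilon, delta) by a privacy accountant in a real implementation.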
Cites Work
- Theory of Reproducing Kernels
- Nonparametric stochastic approximation with large step-sizes
- Learning Theory
- High-dimensional statistics. A non-asymptotic viewpoint
- Differentially private empirical risk minimization
- Theory of Cryptography
- Learning theory estimates via integral operators and their approximations
- Online gradient descent learning algorithms
- Optimum bounds for the distributions of martingales in Banach spaces
- Online Regularized Classification Algorithms
- The algorithmic foundations of differential privacy
- Fast and strong convergence of online learning algorithms
- Optimal rates for multi-pass stochastic gradient methods
- Convergence of unregularized online learning algorithms
- Online gradient descent algorithms for functional data learning
- Private stochastic convex optimization: optimal rates in linear time
- Differentially private SGD with non-smooth losses
- Capacity dependent analysis for functional online learning algorithms