SHED: a Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing
DOI: 10.1016/j.automatica.2023.111460
arXiv: 2202.05800
MaRDI QID: Q6152580
Authors: Nicolò Dal Fabbro, Subhrakanti Dey, Michele Rossi, Luca Schenato
Publication date: 13 February 2024
Published in: Automatica
Full work available at URL: https://arxiv.org/abs/2202.05800
Keywords: Newton method; distributed optimization; heterogeneous networks; super-linear convergence; federated learning; non-i.i.d. data
MSC classification: Numerical optimization and variational techniques (65K10); Learning and adaptive systems in artificial intelligence (68T05)
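The title and keywords describe a Newton-type federated method in which workers share eigenvectors of their local Hessians incrementally with a server, which assembles an approximate Hessian for a damped Newton update. Below is a minimal, hypothetical sketch of that idea, not the authors' implementation: the function names, the choice to approximate the discarded eigen-directions by the smallest retained eigenvalue, and the damping parameter are all assumptions for illustration.

```python
# Hypothetical sketch of incremental Hessian eigenvector sharing;
# names and details are assumptions, not the paper's algorithm.
import numpy as np

def local_eigen_payload(H, k):
    """Worker side: return the top-k (eigenvalue, eigenvector) pairs
    of the local Hessian H, plus a scalar standing in for the
    discarded part of the spectrum (an assumption here)."""
    vals, vecs = np.linalg.eigh(H)          # ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]  # reorder to descending
    rho = vals[k] if k < len(vals) else 0.0
    return vals[:k], vecs[:, :k], rho

def approx_hessian(payloads, d):
    """Server side: rebuild a low-rank-plus-scaled-identity
    approximation of the average Hessian from worker payloads."""
    H_hat = np.zeros((d, d))
    for vals, vecs, rho in payloads:
        # retained directions kept exactly; discarded subspace
        # approximated at level rho
        H_hat += (vecs * (vals - rho)) @ vecs.T + rho * np.eye(d)
    return H_hat / len(payloads)

def newton_step(grad, H_hat, damping=1e-6):
    """Damped Newton-type update direction."""
    return np.linalg.solve(H_hat + damping * np.eye(len(grad)), grad)

# Toy usage: two workers with random PSD local Hessians, k vectors each
rng = np.random.default_rng(0)
d, k = 10, 3
Hs = [(lambda A: A @ A.T / d)(rng.standard_normal((d, d))) for _ in range(2)]
payloads = [local_eigen_payload(H, k) for H in Hs]
H_hat = approx_hessian(payloads, d)
g = rng.standard_normal(d)
print(newton_step(g, H_hat))
```

In this sketch, "incremental" sharing would correspond to workers sending further eigenvector pairs in later rounds so the server's approximation improves over time; the round-by-round schedule is left out here.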
Cites Work
- On the limited memory BFGS method for large scale optimization
- Advances and Open Problems in Federated Learning
- Distributed adaptive Newton methods with global superlinear convergence
- Analysis and Linear Algebra: The Singular Value Decomposition and Applications