SHED: a Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing (Q6152580)
From MaRDI portal
scientific article; zbMATH DE number 7803914
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | SHED: a Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing | scientific article; zbMATH DE number 7803914 | |
Statements
- SHED: a Newton-type algorithm for federated learning based on incremental Hessian eigenvector sharing (English)
- 13 February 2024
- Newton method
- distributed optimization
- federated learning
- super-linear convergence
- heterogeneous networks
- non i.i.d. data