A Collaborative Training Algorithm for Distributed Learning
From MaRDI portal (publication Q4975823)
Cited in (17 documents):
- A new class of distributed optimization algorithms: application to regression of distributed data
- A framework for parallel and distributed training of neural networks
- Finite-time average consensus based approach for distributed convex optimization
- Distributed convex optimization based on ADMM and belief propagation methods
- Distributed networked learning with correlated data
- Incremental proximal methods for large scale convex optimization
- Nonlinear single layer neural network training algorithm for incremental, nonstationary and distributed learning scenarios
- EXTRA: an exact first-order algorithm for decentralized consensus optimization
- Decentralized ADMM with compressed and event-triggered communication
- Distributed online optimization subject to long-term constraints and time-varying topology: an event-triggered and bandit feedback approach
- scientific article; zbMATH DE number 1948433 (no title available)
- Adaptive consensus: a network pruning approach for decentralized optimization
- A decentralized training algorithm for echo state networks in distributed big data applications
- Convergence rate of incremental gradient and incremental Newton methods
- Distributed parametric and nonparametric regression with on-line performance bounds computation
- Adaptive estimation of external fields in reproducing kernel Hilbert spaces
- Communication-efficient distributed cubic Newton with compressed lazy Hessian