Efficient and systematic partitioning of large and deep neural networks for parallelization
Publication:6487191
Recommendations
- Distributed Deep Learning on Heterogeneous Computing Resources Using Gossip Communication
- A framework for parallel and distributed training of neural networks
- Pipelined model parallelism: complexity results and memory considerations
- Pruning deep convolutional neural networks architectures with evolution strategy
Cites work
- scientific article; zbMATH DE number 529792