Efficient and systematic partitioning of large and deep neural networks for parallelization
Publication: 6487191
DOI: 10.1007/978-3-030-85665-6_13
zbMATH Open: 1512.68301
MaRDI QID: Q6487191
FDO: Q6487191
Authors: Haoran Wang, Chong Li, Thibaut Tachon, Hongxing Wang, Sheng Yang, Sébastien Limet, Sophie Robert
Publication date: 31 March 2022
Recommendations
- Distributed Deep Learning on Heterogeneous Computing Resources Using Gossip Communication
- A framework for parallel and distributed training of neural networks
- Pipelined model parallelism: complexity results and memory considerations
- Pruning deep convolutional neural networks architectures with evolution strategy
- scientific article; zbMATH DE number 529792
Mathematics Subject Classification
- Artificial neural networks and deep learning (68T07)
- Parallel algorithms in computer science (68W10)
- Distributed algorithms (68W15)
Cites Work
Cited In (1)