Efficient and systematic partitioning of large and deep neural networks for parallelization

From MaRDI portal
Publication:6487191

DOI: 10.1007/978-3-030-85665-6_13
zbMATH Open: 1512.68301
MaRDI QID: Q6487191
FDO: Q6487191
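The identifiers above point into the MaRDI knowledge graph, which appears to be Wikibase-backed (the page URL at the bottom uses the standard /w/index.php layout). Below is a minimal Python sketch for retrieving this record programmatically via the standard Wikibase wbgetentities action; the /w/api.php endpoint path and the presence of an English label are assumptions inferred from that layout, not confirmed by this page:

    # Minimal sketch: fetch MaRDI item Q6487191 via the standard Wikibase API.
    # Assumption: the API lives at /w/api.php next to the /w/index.php URL
    # cited at the bottom of this page.
    import requests

    API_URL = "https://portal.mardi4nfdi.de/w/api.php"  # assumed endpoint

    def fetch_item(qid: str) -> dict:
        """Return the raw Wikibase entity JSON for one item."""
        resp = requests.get(
            API_URL,
            params={
                "action": "wbgetentities",  # standard Wikibase action
                "ids": qid,
                "format": "json",
                "languages": "en",
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["entities"][qid]

    item = fetch_item("Q6487191")
    # The English label should be the publication title shown above.
    print(item["labels"]["en"]["value"])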


Authors: Haoran Wang, Chong Li, Thibaut Tachon, Hongxing Wang, Sheng Yang, Sébastien Limet, Sophie Robert


Publication date: 31 March 2022





Recommendations

  • Distributed Deep Learning on Heterogeneous Computing Resources Using Gossip Communication
  • A framework for parallel and distributed training of neural networks
  • Pipelined model parallelism: complexity results and memory considerations
  • Pruning deep convolutional neural networks architectures with evolution strategy
  • scientific article; zbMATH DE number 529792


Mathematics Subject Classification IDs

  • Artificial neural networks and deep learning (68T07)
  • Parallel algorithms in computer science (68W10)
  • Distributed algorithms (68W15)


Cites Work

  • A bridging model for multi-core computing


Cited In (1)

  • Pipelined model parallelism: complexity results and memory considerations






Retrieved from "https://portal.mardi4nfdi.de/w/index.php?title=Publication:6487191&oldid=37942308"