A framework for parallel and distributed training of neural networks
Publication: 2181060
DOI: 10.1016/j.neunet.2017.04.004
zbMath: 1434.68523
arXiv: 1610.07448
OpenAlex: W2543147921
Wikidata: Q38800301 (Scholia: Q38800301)
MaRDI QID: Q2181060
Authors: Simone Scardapane, Paolo Di Lorenzo
Publication date: 18 May 2020
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/1610.07448
Mathematics Subject Classification:
- Artificial neural networks and deep learning (68T07)
- Nonconvex programming, global optimization (90C26)
- Distributed algorithms (68W15)
Related Items (2)
- Banzhaf random forests: cooperative game theory based random forests with consistency
- Distributed stochastic configuration networks with cooperative learning paradigm
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Distributed learning for random vector functional-link networks
- Bounded approximate decentralised coordination via the max-sum algorithm
- Distributed average consensus with least-mean-square deviation
- Discrete-time dynamic average consensus
- A decentralized training algorithm for echo state networks in distributed big data applications
- Distributed semi-supervised support vector machines
- Adopt: asynchronous distributed constraint optimization with quality guarantees
- Fast linear iterations for distributed averaging
- Adaptation, Learning, and Optimization over Networks
- Diffusion Least-Mean Squares Over Adaptive Networks: Formulation and Performance Analysis
- Distributed Sparse Linear Regression
- Sparse Distributed Learning Based on Diffusion Adaptation
- Parallel Selective Algorithms for Nonconvex Big Data Optimization
- A Limited Memory Algorithm for Bound Constrained Optimization
- A Collaborative Training Algorithm for Distributed Learning
- On Iteratively Reweighted Algorithms for Nonsmooth Nonconvex Optimization in Computer Vision
- Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization