A framework for parallel and distributed training of neural networks (Q2181060)

Property / arXiv ID: 1610.07448
Property / DBLP publication ID: journals/nn/ScardapaneL17
Property / cites work: A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
Property / cites work: Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization
Property / cites work: Q5483032
Property / cites work: Q4821526
Property / cites work: A Limited Memory Algorithm for Bound Constrained Optimization
Property / cites work: Q3093335
Property / cites work: Sparse Distributed Learning Based on Diffusion Adaptation
Property / cites work: Q5396673
Property / cites work: Parallel Selective Algorithms for Nonconvex Big Data Optimization
Property / cites work: Q2896094
Property / cites work: Q5405227
Property / cites work: Q4408093
Property / cites work: Diffusion Least-Mean Squares Over Adaptive Networks: Formulation and Performance Analysis
Property / cites work: Distributed Sparse Linear Regression
Property / cites work: Adopt: asynchronous distributed constraint optimization with quality guarantees
Property / cites work: Q5491447
Property / cites work: On Iteratively Reweighted Algorithms for Nonsmooth Nonconvex Optimization in Computer Vision
Property / cites work: A Collaborative Training Algorithm for Distributed Learning
Property / cites work: Bounded approximate decentralised coordination via the max-sum algorithm
Property / cites work: Adaptation, Learning, and Optimization over Networks
Property / cites work: Distributed semi-supervised support vector machines
Property / cites work: A decentralized training algorithm for echo state networks in distributed big data applications
Property / cites work: Distributed learning for random vector functional-link networks
Property / cites work: Q4864293
Property / cites work: Fast linear iterations for distributed averaging
Property / cites work: Distributed average consensus with least-mean-square deviation
Property / cites work: Discrete-time dynamic average consensus


Language: English
Label: A framework for parallel and distributed training of neural networks
Description: scientific article

    Statements

    A framework for parallel and distributed training of neural networks (English)
    18 May 2020
    neural network
    distributed learning
    parallel computing
    networks

    Identifiers

    arXiv ID: 1610.07448
    DBLP publication ID: journals/nn/ScardapaneL17
    Wikidata QID: Q38800301