A zero-gradient-sum algorithm for distributed cooperative learning using a feedforward neural network with random weights
Publication: 2293057
DOI: 10.1016/j.ins.2016.09.016 · zbMath: 1429.68205 · OpenAlex: W2508387985 · MaRDI QID: Q2293057
Publication date: 6 February 2020
Published in: Information Sciences
Full work available at URL: https://doi.org/10.1016/j.ins.2016.09.016
Keywords: consensus; distributed optimization; distributed cooperative learning; feedforward neural network with random weights (FNNRW)
Artificial neural networks and deep learning (68T07); Learning and adaptive systems in artificial intelligence (68T05); Agent technology and artificial intelligence (68T42)
Related Items (9)
Uses Software
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Distributed learning for random vector functional-link networks
- Distributed stochastic subgradient projection algorithms for convex optimization
- Multilayer feedforward networks are universal approximators
- Editorial: Randomized algorithms for training neural networks
- A survey of randomized algorithms for training neural networks
- A decentralized training algorithm for echo state networks in distributed big data applications
- Information Exchange and Learning Dynamics Over Weakly Connected Adaptive Networks
- Incremental Stochastic Subgradient Algorithms for Convex Optimization
- Matrix Analysis
- Incremental Adaptive Strategies Over Distributed Networks
- Sensor Networks With Random Links: Topology Design for Distributed Consensus
- Distributed Sparse Linear Regression
- Diffusion LMS Strategies for Distributed Estimation
- Adaptive Robust Distributed Learning in Diffusion Sensor Networks
- Distributed Basis Pursuit
- Diffusion Adaptation Strategies for Distributed Optimization and Learning Over Networks
- Sparse Distributed Learning Based on Diffusion Adaptation
- D-ADMM: A Communication-Efficient Distributed Algorithm for Separable Optimization
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- Dictionary Learning Over Distributed Models
- Distributed Subgradient Methods for Multi-Agent Optimization
- Distributed and Recursive Parameter Estimation in Parametrized Linear State-Space Models
- Gossip Algorithms for Convex Consensus Optimization Over Networks
- Zero-Gradient-Sum Algorithms for Distributed Convex Optimization: The Continuous-Time Case