A decentralized training algorithm for echo state networks in distributed big data applications
Publication:2418177
DOI: 10.1016/j.neunet.2015.07.006
zbMath: 1414.68074
OpenAlex: W1544900599
Wikidata: Q30991617
Scholia: Q30991617
MaRDI QID: Q2418177
Dianhui Wang, Massimo Panella, Simone Scardapane
Publication date: 3 June 2019
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2015.07.006
Keywords: recurrent neural network; big data; alternating direction method of multipliers; echo state network; distributed learning
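The keywords point to the entry's core ingredients: an echo state network reservoir whose linear readout is trained in a distributed fashion via the alternating direction method of multipliers (ADMM). Below is a minimal, hypothetical Python sketch of those ingredients under simple assumptions (a random tanh reservoir, a ridge-regularized readout, and agents simulated as local data shards); the function names `reservoir_states` and `admm_ridge` are illustrative, and the sketch is not the paper's specific decentralized algorithm.

```python
# Hypothetical sketch: echo state network with a readout fit by consensus ADMM
# over simulated agents, each holding a shard of the training data.
import numpy as np

rng = np.random.default_rng(0)

def reservoir_states(u, n_res=100, rho=0.9, leak=1.0):
    """Run a random tanh reservoir over a 1-D input sequence; return the states."""
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
    W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # rescale spectral radius
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in @ np.array([u_t]) + W @ x)
        states.append(x.copy())
    return np.array(states)

def admm_ridge(shards, lam=1e-3, rho_admm=1.0, iters=50):
    """Consensus ADMM for ridge regression; each shard (H_k, y_k) acts as an agent."""
    n = shards[0][0].shape[1]
    K = len(shards)
    z = np.zeros(n)                                    # shared readout weights
    w = [np.zeros(n) for _ in range(K)]                # local estimates
    t = [np.zeros(n) for _ in range(K)]                # scaled dual variables
    for _ in range(iters):
        for k, (H, y) in enumerate(shards):
            A = H.T @ H + rho_admm * np.eye(n)
            b = H.T @ y + rho_admm * (z - t[k])
            w[k] = np.linalg.solve(A, b)               # local least-squares step
        z = sum(w[k] + t[k] for k in range(K)) / K
        z *= rho_admm * K / (2 * lam + rho_admm * K)   # L2-regularized consensus step
        for k in range(K):
            t[k] += w[k] - z                           # dual update
    return z

# Toy usage: predict u[t+1] from the reservoir state at time t, data split over 4 agents.
u = np.sin(0.3 * np.arange(400))
H, y = reservoir_states(u[:-1]), u[1:]
shards = [(H[i::4], y[i::4]) for i in range(4)]
w_out = admm_ridge(shards)
print("train MSE:", np.mean((H @ w_out - y) ** 2))
```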
Related Items
- Distributed support vector machine in master-slave mode
- A framework for parallel and distributed training of neural networks
- Prediction and identification of discrete-time dynamic nonlinear systems based on adaptive echo state network
- Online sequential echo state network with sparse RLS algorithm for time series prediction
- Broad echo state network for multivariate time series prediction
- Controller design based on echo state network with delay output for nonlinear system
- Insights into randomized algorithms for neural networks: practical issues and common pitfalls
- A zero-gradient-sum algorithm for distributed cooperative learning using a feedforward neural network with random weights
- Distributed stochastic configuration networks with cooperative learning paradigm
- A stability criterion for discrete-time fractional-order echo state network and its application
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Reservoir computing approaches to recurrent neural network training
- Distributed learning for random vector functional-link networks
- Re-visiting the echo state property
- A generalized LSTM-like training algorithm for second-order recurrent neural networks
- Online reservoir adaptation by intrinsic plasticity for backpropagation-decorrelation and echo state learning
- Decoupled echo state networks with lateral inhibition
- An experimental unification of reservoir computing methods
- Automatic speech recognition using a predictive echo state network classifier
- Learning grammatical structure with Echo State Networks
- Diffusion recursive least-squares for distributed estimation over adaptive networks
- Fast Distributed Average Consensus Algorithms Based on Advection-Diffusion Processes
- Distributed Sparse Linear Regression
- Sparse Distributed Learning Based on Diffusion Adaptation
- Consensus and Cooperation in Networked Multi-Agent Systems
- A Collaborative Training Algorithm for Distributed Learning