Nonlinear single layer neural network training algorithm for incremental, nonstationary and distributed learning scenarios
From MaRDI portal
Publication: 454450
DOI: 10.1016/J.PATCOG.2012.05.009
zbMath: 1248.68412
DBLP: journals/pr/Martinez-RegoFA12
OpenAlex: W1969332978
Wikidata: Q58036735
Scholia: Q58036735
MaRDI QID: Q454450
David Martínez-Rego, Oscar Fontenla-Romero, Amparo Alonso-Betanzos
Publication date: 5 October 2012
Published in: Pattern Recognition
Full work available at URL: https://doi.org/10.1016/j.patcog.2012.05.009
Related Items (2)
- Distributed cooperative learning over time-varying random networks using a gossip-based communication protocol
- Training a multilayer network with low-memory kernel-and-range projection
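The cited works on RLS lattice filtering and variable-forgetting-factor RLS suggest the flavor of incremental update relevant to this publication's setting (streaming, possibly nonstationary data). The sketch below is purely illustrative and is not the paper's algorithm: it shows RLS-style incremental training of a single-layer network with a logistic output, where desired outputs are mapped through the inverse activation so each update solves a linear problem; the class name, method names, and parameter defaults are all assumptions.

```python
import numpy as np

def logit(d, eps=1e-6):
    # Inverse of the logistic activation: mapping targets through it
    # turns training into a linear least-squares problem.
    d = np.clip(d, eps, 1 - eps)
    return np.log(d / (1 - d))

class IncrementalSingleLayer:
    """Illustrative RLS-style trainer for y = sigmoid(w . x + b).

    A forgetting factor lam < 1 discounts old samples, the usual
    device for tracking nonstationary streams. Names and defaults
    here are assumptions, not the published method.
    """
    def __init__(self, n_inputs, lam=0.99, delta=100.0):
        self.lam = lam
        n = n_inputs + 1                    # +1 for the bias term
        self.w = np.zeros(n)
        self.P = delta * np.eye(n)          # inverse correlation matrix

    def partial_fit(self, x, d):
        x = np.append(x, 1.0)               # augment input with bias
        z = logit(d)                        # desired pre-activation
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)        # RLS gain vector
        e = z - self.w @ x                  # a-priori error
        self.w += k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam

    def predict(self, x):
        x = np.append(x, 1.0)
        return 1.0 / (1.0 + np.exp(-(self.w @ x)))
```

One sample per `partial_fit` call keeps memory constant, which is what makes this family of updates suitable for incremental and distributed scenarios.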
Uses Software
Cites Work
- A new convex objective function for the supervised learning of single-layer neural networks
- Array-Based QR-RLS Multichannel Lattice Filtering
- Deterministic Nonperiodic Flow
- Oscillation and Chaos in Physiological Control Systems
- A Collaborative Training Algorithm for Distributed Learning
- Gradient-based variable forgetting factor RLS algorithm in time-varying environments