A Geometrical Analysis of Global Stability in Trained Feedback Networks
Publication: 5154170
DOI: 10.1162/NECO_A_01187
zbMATH Open: 1471.93098
arXiv: 1809.02386
OpenAlex: W2889608198
Wikidata: Q93062152
Scholia: Q93062152
MaRDI QID: Q5154170
FDO: Q5154170
Authors: Francesca Mastrogiuseppe, Srdjan Ostojic
Publication date: 1 October 2021
Published in: Neural Computation
Abstract: Recurrent neural networks have been extensively studied in the context of neuroscience and machine learning due to their ability to implement complex computations. While substantial progress in designing effective learning algorithms has been achieved in recent years, a full understanding of trained recurrent networks is still lacking. Specifically, the mechanisms that allow computations to emerge from the underlying recurrent dynamics are largely unknown. Here we focus on a simple yet underexplored computational setup: a feedback architecture trained to associate a stationary output with a stationary input. As a starting point, we derive an approximate analytical description of global dynamics in trained networks, assuming uncorrelated connectivity weights in the feedback and in the random bulk. The resulting mean-field theory suggests that the task admits several classes of solutions with different stability properties. The classes are characterized by the geometrical arrangement of the readout with respect to the input vectors in the high-dimensional space spanned by the network population. We find that this approximate theoretical approach can be used to understand how standard training techniques implement the input-output task in finite-size feedback networks. In particular, our simplified description captures the local and global stability properties of the target solution and thus predicts training performance.
Full work available at URL: https://arxiv.org/abs/1809.02386
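The setup the abstract describes can be sketched concretely. The following is a minimal illustration, not the authors' code: it assumes a rank-one feedback loop (feedback vector u, trainable readout w) around a random bulk, trained with a FORCE-style recursive-least-squares rule to hold a stationary output under a stationary input. All parameter values (network size, gain g, target, update schedule) are assumptions chosen for demonstration.

```python
import numpy as np

# Illustrative sketch of a trained feedback network (assumed parameters).
rng = np.random.default_rng(0)
N, g, dt, tau = 500, 1.5, 0.05, 1.0
chi = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random bulk connectivity
u = rng.standard_normal(N)      # feedback vector (readout fed back into the network)
I_ext = rng.standard_normal(N)  # stationary external input
w = np.zeros(N)                 # trainable readout weights
P = np.eye(N)                   # RLS inverse-correlation estimate
z_target = 0.5                  # stationary target output

x = 0.1 * rng.standard_normal(N)
for t in range(4000):
    r = np.tanh(x)              # firing rates
    z = w @ r                   # readout
    # leak + random bulk + feedback of the readout + stationary input
    x += dt / tau * (-x + chi @ r + u * z + I_ext)
    if t % 2 == 0:              # recursive-least-squares readout update
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w -= (z - z_target) * k

r = np.tanh(x)
print("output after training:", w @ r, "target:", z_target)

# Local stability at the trained state: the Jacobian of the dynamics is
# (-I + (chi + u w^T) diag(1 - tanh(x)^2)) / tau; the fixed point is
# locally stable when all eigenvalues have negative real part.
J = (-np.eye(N) + (chi + np.outer(u, w)) * (1.0 - r**2)) / tau
print("max Re(eigenvalue):", np.linalg.eigvals(J).real.max())
```

The final Jacobian check mirrors the local-stability analysis the abstract refers to; the global picture in the paper additionally depends on the geometry of the readout relative to the input vector, which this short sketch does not reproduce.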
Recommendations
- Scientific article (title not available; zbMATH DE number 2059742)
- Global asymptotic stability of a class of feedback neural networks with an application to optimization problems
- On absolute stability of a class of neural networks with feedback
- Global asymptotic stability of a class of dynamical neural networks
- Global stability and plus-global stability. An application to forward neural networks
- Scientific article (title not available; zbMATH DE number 1195673)
- Global stability analysis in Hopfield neural networks
- Scientific article (title not available; zbMATH DE number 2059895)
MSC Classification
- Artificial neural networks and deep learning (68T07)
- Feedback control (93B52)
- Networked control (93B70)
Cites Work
- Neural networks and physical systems with emergent collective computational abilities
- Title not available
- Title not available
- Recursive least-squares identification algorithms with incomplete excitation: convergence analysis and application to adaptive control
- Mean-field equations, bifurcation map and route to chaos in discrete time neural networks
- Discrete time recurrent neural network architectures: A unifying review
- Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks
Cited In (5)
- Learn to synchronize, synchronize to learn
- The echo index and multistability in input-driven recurrent neural networks
- Shaping dynamics with multiple populations in low-rank recurrent networks
- Training spiking neural networks in the strong coupling regime
- A Mathematical Analysis of the Effects of Hebbian Learning Rules on the Dynamics and Structure of Discrete-Time Random Recurrent Neural Networks