Reservoirs learn to learn
From MaRDI portal
Publication: 2231900
DOI: 10.1007/978-981-13-1687-6_3
zbMATH Open: 1482.68206
arXiv: 1909.07486
OpenAlex: W2973809955
MaRDI QID: Q2231900
FDO: Q2231900
Authors: Anand Subramoney, Franz Scherr, Wolfgang Maass
Publication date: 30 September 2021
Abstract: We consider reservoirs in the form of liquid state machines, i.e., recurrently connected networks of spiking neurons with randomly chosen weights. So far only the weights of a linear readout were adapted for a specific task. We wondered whether the performance of liquid state machines can be improved if the recurrent weights are chosen with a purpose, rather than randomly. After all, weights of recurrent connections in the brain are also not assumed to be randomly chosen. Rather, these weights were probably optimized during evolution, development, and prior learning experiences for specific task domains. In order to examine the benefits of choosing recurrent weights within a liquid with a purpose, we applied the Learning-to-Learn (L2L) paradigm to our model: We optimized the weights of the recurrent connections -- and hence the dynamics of the liquid state machine -- for a large family of potential learning tasks, which the network might have to learn later through modification of the weights of readout neurons. We found that this two-tiered process substantially improves the learning speed of liquid state machines for specific tasks. In fact, this learning speed increases further if one does not train the weights of linear readouts at all, and relies instead on the internal dynamics and fading memory of the network for remembering salient information that it could extract from preceding examples for the current learning task. This second type of learning has recently been proposed to underlie fast learning in the prefrontal cortex and motor cortex, and hence it is of interest to explore its performance also in models. Since liquid state machines share many properties with other types of reservoirs, our results raise the question whether L2L conveys similar benefits also to these other reservoirs.
Full work available at URL: https://arxiv.org/abs/1909.07486
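The baseline the abstract starts from, a reservoir with fixed random recurrent weights where only a linear readout is trained, can be sketched as follows. This is a minimal illustration, not the paper's model: it uses a rate-based (tanh) echo state network instead of spiking neurons, a toy one-step-delay task, and arbitrarily chosen sizes and scaling constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions, not taken from the paper.
n_in, n_res, n_out = 1, 100, 1

# Classic reservoir setup: input and recurrent weights are random and
# stay fixed; the recurrent matrix is rescaled so its spectral radius
# is below 1, giving the fading-memory (echo state) property.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence; collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input delayed by one time step.
T = 500
u = rng.uniform(-1, 1, (T, n_in))
target = np.roll(u, 1, axis=0)

X = run_reservoir(u)

# Only the linear readout is trained, here by ridge regression.
# The paper's point is that additionally optimizing W itself
# (via Learning-to-Learn over a family of tasks) speeds up this stage,
# or can make readout training unnecessary altogether.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ target)
pred = X @ W_out
mse = np.mean((pred[10:] - target[10:]) ** 2)  # skip a short washout
print(mse)
```

Because the recurrent weights are never touched, all task-specific learning in this baseline lives in `W_out`; the L2L scheme described in the abstract moves part of that burden into the reservoir dynamics themselves.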
Recommendations
- Structure optimization of reservoir networks
- An experimental unification of reservoir computing methods
- Online reservoir adaptation by intrinsic plasticity for backpropagation-decorrelation and echo state learning
- Reservoir computing with computational matter
- Reservoir computing approaches to recurrent neural network training
- Learning and adaptive systems in artificial intelligence (68T05)
- Networks and circuits as models of computation; circuit complexity (68Q06)
Cited In (7)
- EO-MTRNN: evolutionary optimization of hyperparameters for a neuro-inspired computational model of spatiotemporal learning
- Transfer-RLS method and transfer-FORCE learning for simple and fast training of reservoir computing models
- Reservoir optimization in recurrent neural networks using properties of Kronecker product
- Collective dynamics of rate neurons for supervised learning in a reservoir computing system
- Multifunctionality in a reservoir computer
- Online reservoir adaptation by intrinsic plasticity for backpropagation-decorrelation and echo state learning
- A reservoir computing model of reward-modulated motor learning and automaticity