Reservoirs learn to learn
Publication: 2231900
Abstract: We consider reservoirs in the form of liquid state machines, i.e., recurrently connected networks of spiking neurons with randomly chosen weights. So far, only the weights of a linear readout have been adapted for a specific task. We wondered whether the performance of liquid state machines can be improved if the recurrent weights are chosen with a purpose, rather than randomly. After all, weights of recurrent connections in the brain are also not assumed to be randomly chosen. Rather, these weights were probably optimized during evolution, development, and prior learning experiences for specific task domains. In order to examine the benefits of choosing recurrent weights within a liquid with a purpose, we applied the Learning-to-Learn (L2L) paradigm to our model: we optimized the weights of the recurrent connections (and hence the dynamics of the liquid state machine) for a large family of potential learning tasks, which the network might have to learn later through modification of the weights of readout neurons. We found that this two-tiered process substantially improves the learning speed of liquid state machines for specific tasks. In fact, this learning speed increases further if one does not train the weights of linear readouts at all, and relies instead on the internal dynamics and fading memory of the network for remembering salient information that it could extract from preceding examples for the current learning task. This second type of learning has recently been proposed to underlie fast learning in the prefrontal cortex and motor cortex, and hence it is of interest to explore its performance in models as well. Since liquid state machines share many properties with other types of reservoirs, our results raise the question of whether L2L conveys similar benefits to these other reservoirs.
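The abstract contrasts the standard reservoir-computing recipe (fixed random recurrent weights, only a linear readout trained per task) with the paper's L2L variant, in which the recurrent weights themselves are optimized across a family of tasks. The sketch below illustrates only the baseline recipe, and with simplifications: it uses a rate-based echo state reservoir rather than the paper's spiking liquid state machine, a hypothetical delayed-copy toy task, and a ridge-regression readout; the network sizes and hyperparameters are illustrative assumptions, and the L2L outer loop (optimizing the recurrent weights over many tasks) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the paper's liquid state machine uses spiking neurons,
# whereas this sketch uses continuous rate units.
N_RES, N_IN = 200, 1

def make_reservoir(spectral_radius=0.9):
    """Random recurrent weights, rescaled so the largest eigenvalue magnitude
    equals `spectral_radius` (a common way to preserve fading memory)."""
    W = rng.normal(size=(N_RES, N_RES))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    W_in = rng.normal(size=(N_RES, N_IN))
    return W, W_in

def run(W, W_in, inputs, leak=0.3):
    """Drive the reservoir with an input sequence; collect the state trajectory."""
    x = np.zeros(N_RES)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-4):
    """Second-tier learning: only the linear readout is fit (ridge regression)."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(N_RES), S.T @ targets)

# Hypothetical toy task: reproduce the input signal delayed by 5 steps.
u = rng.normal(size=500)
y = np.roll(u, 5)
W, W_in = make_reservoir()
S = run(W, W_in, u)
w_out = train_readout(S[100:], y[100:])   # discard the washout transient
pred = S[100:] @ w_out
print("readout MSE:", np.mean((pred - y[100:]) ** 2))
```

In the paper's two-tiered scheme, the `make_reservoir` step would be replaced by an outer optimization of the recurrent weights over a whole task family, so that the per-task readout training (or, in the second variant, no readout training at all) becomes much faster.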
Recommendations
- Structure optimization of reservoir networks
- An experimental unification of reservoir computing methods
- Online reservoir adaptation by intrinsic plasticity for backpropagation-decorrelation and echo state learning
- Reservoir computing with computational matter
- Reservoir computing approaches to recurrent neural network training
Cited in (7)
- Multifunctionality in a reservoir computer
- A reservoir computing model of reward-modulated motor learning and automaticity
- Collective dynamics of rate neurons for supervised learning in a reservoir computing system
- EO-MTRNN: evolutionary optimization of hyperparameters for a neuro-inspired computational model of spatiotemporal learning
- Online reservoir adaptation by intrinsic plasticity for backpropagation-decorrelation and echo state learning
- Reservoir optimization in recurrent neural networks using properties of Kronecker product
- Transfer-RLS method and transfer-FORCE learning for simple and fast training of reservoir computing models