Memory and forecasting capacities of nonlinear recurrent networks
From MaRDI portal
Publication:2116285
Abstract: The notion of memory capacity, originally introduced for echo state and linear networks with independent inputs, is generalized to nonlinear recurrent networks with stationary but dependent inputs. The presence of dependence in the inputs makes it natural to introduce the network forecasting capacity, which measures the extent to which future time series values can be forecast from network states. Generic bounds for memory and forecasting capacities are formulated in terms of the number of neurons of the nonlinear recurrent network and the autocovariance function or the spectral density of the input. These bounds generalize well-known estimates in the literature to a dependent-inputs setup. Finally, for the particular case of linear recurrent networks with independent inputs, it is proved that the memory capacity is given by the rank of the associated controllability matrix, a fact long assumed by the community without proof.
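The final result of the abstract — that the memory capacity of a linear recurrent network with independent inputs equals the rank of its controllability matrix — can be illustrated numerically. The sketch below assumes a state recursion of the form x_t = A x_{t-1} + C z_t with N neurons and scalar inputs z_t (the function name `controllability_matrix` and the specific matrices are illustrative choices, not taken from the paper):

```python
import numpy as np

def controllability_matrix(A, C):
    """Stack the columns C, AC, ..., A^(N-1)C, where N is the state dimension."""
    N = A.shape[0]
    cols = [C]
    for _ in range(N - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

N = 5
A = 0.9 * np.eye(N)      # every mode decays identically
C = np.ones((N, 1))      # and receives the input along the same direction
R = controllability_matrix(A, C)
rank = np.linalg.matrix_rank(R)
# With A = 0.9 I all columns of R are parallel, so the rank collapses to 1:
# under the paper's result, such a network can only remember one input lag,
# despite having N = 5 neurons.
```

Choosing instead a diagonal A with distinct eigenvalues makes R a (scaled) Vandermonde matrix of full rank N, which matches the intuition that heterogeneous timescales maximize memory.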
Recommendations
- Distributed sequence memory of multidimensional inputs in recurrent networks
- scientific article; zbMATH DE number 1843097
- scientific article; zbMATH DE number 2033224
- Programmed interactions in higher-order neural networks: Maximal capacity
- Elements for a general memory structure: properties of recurrent neural networks used to form situation models
Cites work
- scientific article; zbMATH DE number 1713116
- scientific article; zbMATH DE number 65817
- scientific article; zbMATH DE number 635657
- scientific article; zbMATH DE number 1182386
- scientific article; zbMATH DE number 6125590
- scientific article; zbMATH DE number 7164779
- A local echo state property through the largest Lyapunov exponent
- Approximating nonlinear fading-memory operators using neural network models
- Distributed sequence memory of multidimensional inputs in recurrent networks
- Dynamical systems as temporal feature spaces
- Echo state networks are universal
- Echo state property linked to an input: exploring a fundamental characteristic of recurrent neural networks
- Memory in linear recurrent neural networks in continuous time
- Nonlinear memory capacity of parallel time-delay reservoir computers in the processing of multidimensional signals
- On some properties of positive definite Toeplitz matrices and their possible applications
- Re-visiting the echo state property
- Short-term memory capacity in networks via the restricted isometry property
- Stochastic nonlinear time series forecasting using time-delay reservoir computers: performance and universality
- The asymptotic performance of linear echo state neural networks
- The identification of nonlinear discrete-time fading-memory systems using neural network models
- Toeplitz and circulant matrices: a review
- Universal discrete-time reservoir computers with stochastic inputs and linear readouts using non-homogeneous state-affine systems
Cited in (7)
- Learn to synchronize, synchronize to learn
- Learning strange attractors with reservoir systems
- Dimension reduction in recurrent networks by canonicalization
- Short-term memory capacity in networks via the restricted isometry property
- Modelling memory functions with recurrent neural networks consisting of input compensation units: I. Static situations
- Distributed sequence memory of multidimensional inputs in recurrent networks
- Memory in linear recurrent neural networks in continuous time