Memory and forecasting capacities of nonlinear recurrent networks

From MaRDI portal
Publication:2116285

DOI: 10.1016/J.PHYSD.2020.132721
zbMATH Open: 1484.68185
arXiv: 2004.11234
OpenAlex: W3018848489
MaRDI QID: Q2116285
FDO: Q2116285


Authors: Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega


Publication date: 16 March 2022

Published in: Physica D

Abstract: The notion of memory capacity, originally introduced for echo state and linear networks with independent inputs, is generalized to nonlinear recurrent networks with stationary but dependent inputs. The presence of dependence in the inputs naturally motivates the introduction of the network forecasting capacity, which measures how well future time series values can be forecast using network states. Generic bounds for memory and forecasting capacities are formulated in terms of the number of neurons of the nonlinear recurrent network and the autocovariance function or the spectral density of the input. These bounds generalize well-known estimates in the literature to a dependent-inputs setup. Finally, for the particular case of linear recurrent networks with independent inputs, it is proved that the memory capacity is given by the rank of the associated controllability matrix, a fact that had long been assumed true by the community without proof.
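The abstract's final result, that the memory capacity of a linear recurrent network with independent inputs equals the rank of its controllability matrix, can be illustrated with a short numerical sketch. The sketch below is illustrative only and follows standard linear-systems notation rather than the paper's own code: `A` plays the role of the reservoir connectivity matrix and `c` the input mask of a linear recurrent network x_t = A x_{t-1} + c z_t.

```python
import numpy as np

def controllability_matrix(A, c):
    """Build the controllability matrix [c, Ac, A^2 c, ..., A^(n-1) c]."""
    n = A.shape[0]
    cols = [c]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])  # next Krylov vector A^k c
    return np.column_stack(cols)

rng = np.random.default_rng(0)
n = 5
# Generic random reservoir, scaled to keep the spectral radius below 1.
A = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
c = rng.standard_normal(n)  # input mask

R = controllability_matrix(A, c)
# Per the paper's result, this rank is the network's memory capacity;
# for a generic (A, c) pair it is full, i.e. equal to n.
print(np.linalg.matrix_rank(R))
```

A deficient rank (for instance, when `c` lies in a proper `A`-invariant subspace) would correspondingly cap the memory capacity below the number of neurons.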


Full work available at URL: https://arxiv.org/abs/2004.11234






Cited In (5)





