Metric entropy limits on recurrent neural network learning of linear dynamical systems

From MaRDI portal




Abstract: One of the most influential results in neural network theory is the universal approximation theorem [1, 2, 3] which states that continuous functions can be approximated to within arbitrary accuracy by single-hidden-layer feedforward neural networks. The purpose of this paper is to establish a result in this spirit for the approximation of general discrete-time linear dynamical systems - including time-varying systems - by recurrent neural networks (RNNs). For the subclass of linear time-invariant (LTI) systems, we devise a quantitative version of this statement. Specifically, measuring the complexity of the considered class of LTI systems through metric entropy according to [4], we show that RNNs can optimally learn - or identify in system-theory parlance - stable LTI systems. For LTI systems whose input-output relation is characterized through a difference equation, this means that RNNs can learn the difference equation from input-output traces in a metric-entropy optimal manner.
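The identification setting described in the abstract can be illustrated with a minimal sketch (this is not the paper's construction): a stable LTI system given by a first-order difference equation y[t] = a·y[t-1] + b·x[t] is recovered from an input-output trace. The coefficients a_true, b_true, the trace length, and the least-squares recovery step are all illustrative assumptions; a linear RNN with scalar state realizes exactly this recursion.

```python
import numpy as np

# Illustrative sketch (not the paper's method): identify a stable LTI
# system y[t] = a*y[t-1] + b*x[t] from an input-output trace by
# ordinary least squares on the regressors [y[t-1], x[t]].
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5           # |a| < 1  => stable system (assumed values)
T = 200

x = rng.standard_normal(T)          # input trace
y = np.zeros(T)                     # output trace generated by the recursion
for t in range(1, T):
    y[t] = a_true * y[t - 1] + b_true * x[t]

# Stack regressors [y[t-1], x[t]] and solve for (a, b) in least squares
Phi = np.column_stack([y[:-1], x[1:]])
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
print(a_hat, b_hat)
```

Because the trace is noiseless, the least-squares solve recovers the coefficients essentially exactly; the paper's contribution is the quantitative, metric-entropy-optimal version of such identification statements for whole classes of stable LTI systems.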




This page was built for publication: Metric entropy limits on recurrent neural network learning of linear dynamical systems

MaRDI item: Q2134114