Equivalence of equilibrium propagation and recurrent backpropagation

From MaRDI portal
Publication:3379592

DOI: 10.1162/NECO_A_01160
zbMATH Open: 1474.68271
arXiv: 1711.08416
OpenAlex: W2963953025
Wikidata: Q90701059
Scholia: Q90701059
MaRDI QID: Q3379592
FDO: Q3379592


Authors: Benjamin Scellier, Yoshua Bengio


Publication date: 27 September 2021

Published in: Neural Computation

Abstract: Recurrent Backpropagation and Equilibrium Propagation are supervised learning algorithms for fixed-point recurrent neural networks that differ in their second phase. In the first phase, both algorithms converge to a fixed point corresponding to the configuration where the prediction is made. In the second phase, Equilibrium Propagation relaxes to another nearby fixed point with smaller prediction error, whereas Recurrent Backpropagation uses a side network to compute error derivatives iteratively. In this work, we establish a close connection between the two algorithms. We show that, at every moment in the second phase, the temporal derivatives of the neural activities in Equilibrium Propagation are equal to the error derivatives computed iteratively by Recurrent Backpropagation in the side network. This work shows that a side network is not required for the computation of error derivatives, and it supports the hypothesis that, in biological neural networks, temporal derivatives of neural activities may code for error signals.
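The equivalence described in the abstract can be checked numerically in a minimal setting. The sketch below is an illustration, not the paper's construction: it uses a linear fixed-point network with a symmetric weight matrix (the symmetry standing in for the energy-based assumption behind Equilibrium Propagation) and a squared-error cost, and compares the rescaled temporal differences of the nudged second phase, -(s_{t+1} - s_t)/beta, against the side-network error signals z_t = (J^T)^t c of Recurrent Backpropagation. All names (J, h, y, beta) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Symmetric weight matrix (energy-based setting), spectral radius < 1
A = rng.standard_normal((n, n))
J = 0.5 * (A + A.T)
J *= 0.9 / max(abs(np.linalg.eigvalsh(J)))
h = rng.standard_normal(n)  # constant input drive
y = rng.standard_normal(n)  # target

f = lambda s: J @ s + h                       # fixed-point dynamics s <- f(s)
s_star = np.linalg.solve(np.eye(n) - J, h)    # first-phase fixed point
c = s_star - y                                # dC/ds at s*, with C = 0.5||s - y||^2

T = 20
beta = 1e-6  # small nudging strength

# Recurrent Backpropagation: side network iterates z_{t+1} = J^T z_t, z_0 = c
z = c.copy()
rbp_signals = []
for t in range(T):
    rbp_signals.append(z.copy())
    z = J.T @ z

# Equilibrium Propagation: nudged second phase s_{t+1} = f(s_t) - beta * dC/ds
s = s_star.copy()
ep_signals = []
for t in range(T):
    s_next = f(s) - beta * (s - y)
    ep_signals.append(-(s_next - s) / beta)   # rescaled temporal derivative
    s = s_next

err = max(np.max(np.abs(e - r)) for e, r in zip(ep_signals, rbp_signals))
print(f"max deviation over {T} steps: {err:.2e}")
```

To first order in beta the two sequences coincide at every step, so the printed deviation is of order beta: the temporal derivatives of the nudged dynamics themselves carry the error signals that Recurrent Backpropagation computes in a separate side network.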


Full work available at URL: https://arxiv.org/abs/1711.08416









