Overcoming catastrophic forgetting in neural networks

From MaRDI portal
Publication:4646167

DOI: 10.1073/PNAS.1611835114
zbMATH Open: 1404.92015
arXiv: 1612.00796
OpenAlex: W2560647685
Wikidata: Q37737121
Scholia: Q37737121
MaRDI QID: Q4646167
FDO: Q4646167


Authors: James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwińska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, Raia Hadsell


Publication date: 11 January 2019

Published in: Proceedings of the National Academy of Sciences

Abstract: The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Neural networks are not, in general, capable of this, and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks which they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate that our approach is scalable and effective by solving a set of classification tasks based on the MNIST handwritten digit dataset and by learning several Atari 2600 games sequentially.
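The approach described in the abstract is elastic weight consolidation (EWC): when training on a new task, a quadratic penalty anchors each weight to its value after the old task, scaled by that weight's estimated importance (a diagonal Fisher information term). The NumPy sketch below illustrates that penalty under stated assumptions; the names (ewc_penalty, fisher_diag, theta_star, lam) are illustrative and not taken from the paper.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher_diag, lam=1000.0):
    """EWC-style quadratic penalty: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta       -- current parameters (flat array)
    theta_star  -- parameters learned on the previous task
    fisher_diag -- per-parameter importance (diagonal Fisher estimate)
    lam         -- strength of the anchoring to the old task
    """
    return 0.5 * lam * np.sum(fisher_diag * (theta - theta_star) ** 2)

def total_loss(loss_new_task, theta, theta_star, fisher_diag, lam=1000.0):
    """Loss on the new task plus the penalty that slows learning on
    weights that were important for the old task."""
    return loss_new_task + ewc_penalty(theta, theta_star, fisher_diag, lam)
```

Weights with a large Fisher value are effectively frozen near their old-task values, while unimportant weights remain free to adapt to the new task.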


Full work available at URL: https://arxiv.org/abs/1612.00796








Cited In (55)





