Overcoming catastrophic forgetting in neural networks

Publication:4646167

DOI: 10.1073/pnas.1611835114
zbMath: 1404.92015
arXiv: 1612.00796
OpenAlex: W2560647685
Wikidata: Q37737121
Scholia: Q37737121
MaRDI QID: Q4646167

John Quan, Kieran Milan, Claudia Clopath, Andrei A. Rusu, Neil Rabinowitz, Raia Hadsell, Razvan Pascanu, Tiago Ramalho, Dharshan Kumaran, Demis Hassabis, Guillaume Desjardins, Agnieszka Grabska-Barwińska, James Kirkpatrick, Joel Veness

Publication date: 11 January 2019

Published in: Proceedings of the National Academy of Sciences

Full work available at URL: https://arxiv.org/abs/1612.00796




Related Items (34)

Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation
Sequential changepoint detection in neural networks with checkpoints
An analytical theory of curriculum learning in teacher–student networks*
The inverse variance–flatness relation in stochastic gradient descent is critical for finding flat minima
Drifting neuronal representations: bug or feature?
Single Circuit in V1 Capable of Switching Contexts During Movement Using an Inhibitory Population as a Switch
Model-Centric Data Manifold: The Data Through the Eyes of the Model
Blessing of dimensionality at the edge and geometry of few-shot learning
Deep Bayesian unsupervised lifelong learning
Continuous learning of spiking networks trained with local rules
Accelerating algebraic multigrid methods via artificial neural networks
Adaptive learning of effective dynamics for online modeling of complex systems
Robust federated learning under statistical heterogeneity via hessian-weighted aggregation
Hierarchically structured task-agnostic continual learning
Reliable extrapolation of deep neural operators informed by physics or sparse observations
KS(conf): a light-weight test if a multiclass classifier operates outside of its specifications
Automated Deep Learning: Neural Architecture Search Is Not the End
Accelerating actor-critic-based algorithms via pseudo-labels derived from prior knowledge
Quantum continual learning of quantum data realizing knowledge backward transfer
A neural model of schemas and memory encoding
Toward Training Recurrent Neural Networks for Lifelong Learning
Deep Reinforcement Learning: A State-of-the-Art Walkthrough
Progressive learning: a deep learning framework for continual learning
Learning deep optimizer for blind image deconvolution
A Minimum Free Energy Model of Motor Learning
One Step Back, Two Steps Forward: Interference and Learning in Recurrent Neural Networks
Adversarial Feature Alignment: Avoid Catastrophic Forgetting in Incremental Task Lifelong Learning
Bayesian Filtering with Multiple Internal Models: Toward a Theory of Social Intelligence
Learning Invariant Features in Modulatory Networks through Conflict and Ambiguity
A neurodynamic model of the interaction between color perception and color memory
Reinforcement Learning in Sparse-Reward Environments With Hindsight Policy Gradients
Universal statistics of Fisher information in deep neural networks: mean field approach*
Unnamed Item
Adaptive infinite dropout for noisy and sparse data streams




This page was built for publication: Overcoming catastrophic forgetting in neural networks