Overcoming catastrophic forgetting in neural networks
DOI: 10.1073/pnas.1611835114
zbMATH Open: 1404.92015
arXiv: 1612.00796
OpenAlex: W2560647685
Wikidata: Q37737121 (Scholia: Q37737121)
MaRDI QID: Q4646167
FDO: Q4646167
Authors: James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwińska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, Raia Hadsell
Publication date: 11 January 2019
Published in: Proceedings of the National Academy of Sciences
Full work available at URL: https://arxiv.org/abs/1612.00796
Recommendations
- Catastrophic forgetting in simple networks: an analysis of the pseudorehearsal solution (zbMATH DE number 1928798)
- Adversarial Feature Alignment: Avoid Catastrophic Forgetting in Incremental Task Lifelong Learning
- A comprehensive study of class incremental learning algorithms for visual tasks
- Toward training recurrent neural networks for lifelong learning
Classification (MSC):
- Learning and adaptive systems in artificial intelligence (68T05)
- Neural networks for/in biological studies, artificial life and related topics (92B20)
Cited In (55)
- An algorithm for learning representations of models with scarce data
- Open-world continual learning: unifying novelty detection and continual learning
- The role of diversity and ensemble learning in credit card fraud detection
- Artificial neural variability for deep learning: on overfitting, noise memorization, and catastrophic forgetting
- Replay in deep learning: current approaches and missing biological elements
- Progressive learning: a deep learning framework for continual learning
- Adaptive infinite dropout for noisy and sparse data streams
- Model-Centric Data Manifold: The Data Through the Eyes of the Model
- Distributed Bayesian learning with stochastic natural gradient expectation propagation and the posterior server
- Hierarchically structured task-agnostic continual learning
- A neurosymbolic cognitive architecture framework for handling novelties in open worlds
- Stimulus-Driven and Spontaneous Dynamics in Excitatory-Inhibitory Recurrent Neural Networks for Sequence Representation
- Universal statistics of Fisher information in deep neural networks: mean field approach*
- Task-agnostic continual learning using online variational Bayes with fixed-point updates
- Quantum continual learning of quantum data realizing knowledge backward transfer
- Learning deep optimizer for blind image deconvolution
- Class incremental learning with KL constraint and multi-strategy exemplar selection for classification based on MMFA model
- Sequential changepoint detection in neural networks with checkpoints
- Robust federated learning under statistical heterogeneity via hessian-weighted aggregation
- A minimum free energy model of motor learning
- Bayesian filtering with multiple internal models: toward a theory of social intelligence
- Dynamic Consolidation for Continual Learning
- Deep Reinforcement Learning: A State-of-the-Art Walkthrough
- Single circuit in V1 capable of switching contexts during movement using an inhibitory population as a switch
- Learning invariant features in modulatory networks through conflict and ambiguity
- Blessing of dimensionality at the edge and geometry of few-shot learning
- Exact learning dynamics of deep linear networks with prior knowledge
- The inverse variance-flatness relation in stochastic gradient descent is critical for finding flat minima
- Gated Orthogonal Recurrent Units: On Learning to Forget
- Continuous learning of spiking networks trained with local rules
- Toward training recurrent neural networks for lifelong learning
- Deep Bayesian unsupervised lifelong learning
- A comprehensive study of class incremental learning algorithms for visual tasks
- Drifting neuronal representations: bug or feature?
- A neurodynamic model of the interaction between color perception and color memory
- Accelerating algebraic multigrid methods via artificial neural networks
- Leveraging viscous Hamilton-Jacobi PDEs for uncertainty quantification in scientific machine learning
- One step back, two steps forward: interference and learning in recurrent neural networks
- Title not available
- Lifelong deep learning-based control of robot manipulators
- An analytical theory of curriculum learning in teacher–student networks*
- Reliable extrapolation of deep neural operators informed by physics or sparse observations
- Reinforcement learning in sparse-reward environments with hindsight policy gradients
- A neural model of schemas and memory encoding
- Title not available
- KS(conf): a light-weight test if a multiclass classifier operates outside of its specifications
- Adaptive learning of effective dynamics for online modeling of complex systems
- Exact learning dynamics of deep linear networks with prior knowledge
- Dynamic neural Turing machine with continuous and discrete addressing schemes
- Adversarial Feature Alignment: Avoid Catastrophic Forgetting in Incremental Task Lifelong Learning
- Bio-inspired, task-free continual learning through activity regularization
- Catastrophic forgetting in simple networks: an analysis of the pseudorehearsal solution
- Automated Deep Learning: Neural Architecture Search Is Not the End
- Accelerating actor-critic-based algorithms via pseudo-labels derived from prior knowledge
- A three-way decision approach for dynamically expandable networks