Hierarchically structured task-agnostic continual learning
Publication: 6097166
DOI: 10.1007/S10994-022-06283-9 · arXiv: 2211.07725 · OpenAlex: W4313421283 · MaRDI QID: Q6097166 · FDO: Q6097166
Authors: Heinke Hihn, Daniel A. Braun
Publication date: 12 June 2023
Published in: Machine Learning
Abstract: One notable weakness of current machine learning algorithms is their poor ability to solve new problems without forgetting previously acquired knowledge. The Continual Learning paradigm has emerged as a protocol to systematically investigate settings where the model sequentially observes samples generated by a series of tasks. In this work, we take a task-agnostic view of continual learning and develop a hierarchical information-theoretic optimality principle that facilitates a trade-off between learning and forgetting. We derive this principle from a Bayesian perspective and show its connections to previous approaches to continual learning. Based on this principle, we propose a neural network layer, called the Mixture-of-Variational-Experts layer, that alleviates forgetting by creating a set of information-processing paths through the network, governed by a gating policy. Equipped with a diverse and specialized set of parameters, each path can be regarded as a distinct sub-network that learns to solve tasks. To improve expert allocation, we introduce diversity objectives, which we evaluate in additional ablation studies. Importantly, our approach can operate in a task-agnostic way, i.e., unlike many existing continual learning algorithms, it does not require task-specific knowledge. Owing to its general formulation in terms of generic utility functions, the optimality principle applies to a wide variety of learning problems, including supervised learning, reinforcement learning, and generative modeling. We demonstrate the competitive performance of our method on continual reinforcement learning and variants of the MNIST, CIFAR-10, and CIFAR-100 datasets.
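To make the layer described in the abstract concrete, the following is a minimal sketch, assuming a PyTorch setting, of a gated mixture-of-experts layer whose routing distribution is regularized by a KL term against a uniform prior over experts. This captures the generic utility-versus-information-cost form of the kind of objective the abstract alludes to (expected utility minus a weighted KL information cost); the class name `MoVELayerSketch`, the single-linear-layer experts, the uniform prior, and the trade-off weight `beta` are illustrative assumptions, not the authors' exact layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoVELayerSketch(nn.Module):
    """Illustrative gated mixture-of-experts layer (not the paper's exact code).

    Each expert is an independent processing path through the layer; a
    learned gating policy softly routes each input across the K paths.
    A KL penalty against a uniform prior over experts plays the role of
    the information cost in a utility-vs-information trade-off.
    """

    def __init__(self, in_dim, out_dim, num_experts=4, beta=0.1):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(in_dim, num_experts)  # gating policy
        self.beta = beta  # assumed trade-off weight for the information cost

    def forward(self, x):
        gate_probs = F.softmax(self.gate(x), dim=-1)                # (B, K)
        expert_out = torch.stack([e(x) for e in self.experts], 1)   # (B, K, D)
        y = torch.einsum("bk,bkd->bd", gate_probs, expert_out)      # gated mixture
        # KL(gate || uniform prior): penalizes routing that over-commits
        # to a single path, trading utility against information cost.
        log_prior = -torch.log(torch.tensor(float(len(self.experts))))
        kl = (gate_probs * (gate_probs.clamp_min(1e-8).log() - log_prior)).sum(-1).mean()
        return y, self.beta * kl


# Usage: add the returned penalty to the task loss.
layer = MoVELayerSketch(in_dim=784, out_dim=256)
x = torch.randn(32, 784)
y, info_cost = layer(x)
loss = y.pow(2).mean() + info_cost  # placeholder task loss for illustration
```

In a continual-learning setting, the intuition is that different tasks can occupy different expert paths, so updating one path need not overwrite the parameters another task relies on; the gating regularizer is one simple way to keep all paths in use.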
Full work available at URL: https://arxiv.org/abs/2211.07725
Recommendations
- Task-agnostic continual learning using online variational Bayes with fixed-point updates
- Bio-inspired, task-free continual learning through activity regularization
- Overcoming catastrophic forgetting in neural networks
- Progressive learning: a deep learning framework for continual learning
- Adversarial Feature Alignment: Avoid Catastrophic Forgetting in Incremental Task Lifelong Learning
Cites Work
- Bagging predictors
- Title not available
- Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems
- Determinantal point processes for machine learning
- The coincidence approach to stochastic point processes
- Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy
- Combining Pattern Classifiers
- Overcoming catastrophic forgetting in neural networks
- Towards knowledgeable supervised lifelong learning systems
Cited In (4)
- Open-world continual learning: unifying novelty detection and continual learning
- Adversarial Feature Alignment: Avoid Catastrophic Forgetting in Incremental Task Lifelong Learning
- Bio-inspired, task-free continual learning through activity regularization
- A three-way decision approach for dynamically expandable networks