A brain-inspired algorithm for training highly sparse neural networks
From MaRDI portal
Publication:6097119
DOI: 10.1007/s10994-022-06266-w
arXiv: 1903.07138
OpenAlex: W3208631787
MaRDI QID: Q6097119
FDO: Q6097119
Authors: Zahra Atashgahi, Joost Pieterse, Shiwei Liu, Decebal Constantin Mocanu, Raymond Veldhuis, Mykola Pechenizkiy
Publication date: 12 June 2023
Published in: Machine Learning
Abstract: Sparse neural networks attract increasing interest as they exhibit comparable performance to their dense counterparts while being computationally efficient. Pruning dense neural networks is among the most widely used methods to obtain a sparse neural network. Driven by the high training cost of such methods, which can be unaffordable for low-resource devices, training sparse neural networks sparsely from scratch has recently gained attention. However, existing sparse training algorithms suffer from various issues, including poor performance in high-sparsity scenarios, computing dense gradient information during training, or purely random topology search. In this paper, inspired by the evolution of the biological brain and Hebbian learning theory, we present a new sparse training approach that evolves sparse neural networks according to the behavior of neurons in the network. Concretely, by exploiting the cosine similarity metric to measure the importance of connections, our proposed method, Cosine similarity-based and Random Topology Exploration (CTRE), evolves the topology of sparse neural networks by adding the most important connections to the network without computing dense gradients in the backward pass. We carried out experiments on eight datasets, including tabular, image, and text datasets, and demonstrate that our proposed method outperforms several state-of-the-art sparse training algorithms on extremely sparse neural networks by a large gap. The implementation code is available at https://github.com/zahraatashgahi/CTRE
Full work available at URL: https://arxiv.org/abs/1903.07138
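The growth rule described in the abstract, scoring candidate connections by the cosine similarity of their endpoint neurons' activations and adding the highest-scoring absent links to the sparse topology, can be illustrated with a short sketch. The NumPy code below is a minimal, hypothetical illustration under that assumption, not the authors' CTRE implementation (see the linked repository for that); the function names, the scoring on absolute similarity, and the toy layer sizes are assumptions made for the example.

```python
import numpy as np

def cosine_importance(pre_acts, post_acts):
    """Cosine similarity between every pre/post neuron pair.

    pre_acts:  (batch, n_pre) activations of the earlier layer.
    post_acts: (batch, n_post) activations of the later layer.
    Returns an (n_pre, n_post) similarity matrix.
    """
    pre = pre_acts / (np.linalg.norm(pre_acts, axis=0, keepdims=True) + 1e-12)
    post = post_acts / (np.linalg.norm(post_acts, axis=0, keepdims=True) + 1e-12)
    return pre.T @ post

def grow_connections(mask, pre_acts, post_acts, n_grow):
    """Add the n_grow most similar currently-absent connections to the mask.

    mask: boolean (n_pre, n_post) sparse connectivity pattern.
    Returns a new mask with n_grow extra connections enabled.
    """
    scores = np.abs(cosine_importance(pre_acts, post_acts))
    scores[mask] = -np.inf                       # only consider absent links
    top = np.argsort(scores, axis=None)[::-1][:n_grow]
    new_mask = mask.copy()
    new_mask[np.unravel_index(top, mask.shape)] = True
    return new_mask

# Toy usage: an 8 -> 4 layer starting ~10% dense grows 3 connections
# based on activations from a batch of 32 random inputs.
rng = np.random.default_rng(0)
mask = rng.random((8, 4)) < 0.1
pre = rng.standard_normal((32, 8))
post = rng.standard_normal((32, 4))
mask = grow_connections(mask, pre, post, n_grow=3)
print(mask.sum(), "active connections")
```

In a full sparse-training loop, such a growth step would alternate with pruning of weak connections so that the overall sparsity level stays fixed while the topology evolves; how CTRE combines cosine-based and random exploration is detailed in the paper itself.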
Recommendations
- Joint structure and parameter optimization of multiobjective sparse neural network
- Make \(\ell_1\) regularization effective in training sparse CNN
- Sensitivity-informed provable pruning of neural networks
- Sparsity through evolutionary pruning prevents neuronal networks from overfitting
- Transformed \(\ell_1\) regularization for learning sparse deep neural networks
Cites Work
- NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm
- A topological insight into restricted Boltzmann machines
- Feature extraction. Foundations and applications. Papers from NIPS 2003 workshop on feature extraction, Whistler, BC, Canada, December 11--13, 2003. With CD-ROM.
- Title not available
- Quick and robust feature selection: the strength of energy-efficient sparse training for autoencoders
- Learning similarity with cosine similarity ensemble
Cited In (1)