Method for Convolutional Neural Network Hardware Implementation Based on a Residue Number System
From MaRDI portal
Publication: 6486103
DOI: 10.1134/S0361768822080217
zbMATH Open: 1517.68441
MaRDI QID: Q6486103
Authors: M. V. Valueva, G. V. Valuev, Mikhail Babenko, Andrei Tchernykh, Jorge M. Cortés-Mendoza
Publication date: 5 January 2023
Published in: Programming and Computer Software
Recommendations
- Application of the residue number system to reduce hardware costs of the convolutional neural network implementation
- Convolution accelerator designs using fast algorithms
- A faster algorithm for reducing the computational complexity of convolutional neural networks
- An FPGA implementation of deep spiking neural networks for low-power and fast classification
- Parallelization of cellular neural networks on GPU
Mathematics Subject Classification:
- Learning and adaptive systems in artificial intelligence (68T05)
- Hardware implementations of nonnumerical algorithms (VLSI algorithms, etc.) (68W35)
Cited In (7)
- Application of the residue number system to reduce hardware costs of the convolutional neural network implementation
- Convolution accelerator designs using fast algorithms
- A faster algorithm for reducing the computational complexity of convolutional neural networks
- An analytic formulation of convolutional neural network learning for pattern recognition
- FPGA design and hardware implementation of a convolutional neural network for classification of saccadic eye movements
- A neural network accelerated optimization method for FPGA
- Accelerating CNN models for face verification with convolution theorem