Hybrid tensor decomposition in neural network compression
DOI: 10.1016/J.NEUNET.2020.09.006 · zbMATH Open: 1475.68325 · DBLP: journals/nn/WuWZDL20 · arXiv: 2006.15938 · OpenAlex: W3037399553 · Wikidata: Q99724362 · Scholia: Q99724362 · MaRDI QID: Q2057771 · FDO: Q2057771
Authors: Bijiao Wu, Dingheng Wang, Guangshe Zhao, Lei Deng, Guoqi Li
Publication date: 7 December 2021
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/2006.15938
Recommendations
- CMD: controllable matrix decomposition with global optimization for deep neural network compression
- Towards compact neural networks via end-to-end training: a Bayesian tensor approach with automatic rank determination
- Survey of deep neural network model compression
- Tensor neural network models for tensor singular value decompositions
- Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 2 Applications and Future Perspectives
Keywords: balanced structure; hierarchical Tucker; tensor-train; hybrid tensor decomposition; neural network compression
MSC: Artificial neural networks and deep learning (68T07); Factorization of matrices (15A23); Multilinear algebra, tensor calculus (15A69)
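The keywords name the tensor-train (TT) format as one ingredient of the hybrid decomposition studied in the paper. As an orientation aid only, the following is a minimal TT-SVD sketch in Python/NumPy showing how a dense weight tensor can be factorized into TT cores; the function name tt_svd, the fixed max_rank truncation, and the 16^4 reshaping of a 256x256 weight are illustrative assumptions and do not reproduce the paper's hybrid hierarchical-Tucker/tensor-train scheme.

```python
# Minimal TT-SVD sketch (illustrative only; not the paper's hybrid HT/TT method).
import numpy as np

def tt_svd(tensor, max_rank):
    """Factorize a d-way tensor into tensor-train cores via repeated truncated SVD."""
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank_prev = 1
    unfolding = tensor.reshape(rank_prev * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(unfolding, full_matrices=False)
        rank_next = min(max_rank, len(S))
        # k-th TT core has shape (r_{k-1}, n_k, r_k)
        cores.append(U[:, :rank_next].reshape(rank_prev, shape[k], rank_next))
        # Carry the remaining factor forward and unfold along the next mode
        unfolding = (np.diag(S[:rank_next]) @ Vt[:rank_next]).reshape(
            rank_next * shape[k + 1], -1)
        rank_prev = rank_next
    cores.append(unfolding.reshape(rank_prev, shape[-1], 1))
    return cores

# Example: compress a 4-way reshaping of a 256x256 dense-layer weight matrix.
rng = np.random.default_rng(0)
weight = rng.standard_normal((16, 16, 16, 16))  # 256*256 entries viewed as 16^4
cores = tt_svd(weight, max_rank=8)
print([c.shape for c in cores])  # (1,16,8), (8,16,8), (8,16,8), (8,16,1)
```

Under these assumed settings the four cores hold 128 + 1024 + 1024 + 128 = 2304 parameters versus 65536 in the original tensor, which is the kind of storage reduction the compared TT and hierarchical Tucker formats aim for.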
Cites Work
- Analysis of individual differences in multidimensional scaling via an \(n\)-way generalization of ``Eckart-Young'' decomposition
- Tensor-train decomposition
- Hierarchical Singular Value Decomposition of Tensors
- Optimization problems in contracted tensor networks
- A new scheme for the tensor representation
- Algorithm 941: \texttt{htucker} -- a Matlab toolbox for tensors in hierarchical Tucker format
- Decompositions of a Higher-Order Tensor in Block Terms—Part II: Definitions and Uniqueness
- An introduction to hierarchical (\(\mathcal H\)-) rank and TT-rank of tensors with examples
- Preconditioned low-rank methods for high-dimensional elliptic PDE eigenvalue problems
- Regularized computation of approximate pseudoinverse of large matrices using low-rank tensor train decompositions
Cited In (6)
- Towards compact neural networks via end-to-end training: a Bayesian tensor approach with automatic rank determination
- Survey of deep neural network model compression
- A zeroing neural dynamics based acceleration optimization approach for optimizers in deep neural networks
- Title not available
- CMD: controllable matrix decomposition with global optimization for deep neural network compression
- Online subspace learning and imputation by tensor-ring decomposition
Uses Software