Stochastic gradients for large-scale tensor decomposition
Abstract: Tensor decomposition is a well-known tool for multiway data analysis. This work proposes using stochastic gradients for efficient generalized canonical polyadic (GCP) tensor decomposition of large-scale tensors. GCP is a recently proposed variant of tensor decomposition that allows a variety of loss functions, such as Bernoulli loss for binary data or Huber loss for robust estimation. The stochastic gradient is formed from randomly sampled elements of the tensor and is efficient because it can be computed using the sparse matricized-tensor-times-Khatri-Rao product (MTTKRP) tensor kernel. For dense tensors, we simply use uniform sampling. For sparse tensors, we propose two types of stratified sampling that give precedence to sampling nonzeros. Numerical results demonstrate the advantages of the proposed approach and its scalability to large-scale problems.
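Since the abstract describes the core algorithm (randomly sampled tensor elements feeding a sparse-MTTKRP-style stochastic gradient), the following is a minimal sketch of one such step for a rank-R CP model of a dense 3-way tensor, assuming squared loss, uniform sampling, and plain SGD. This is not the authors' implementation; the function name gcp_sgd_step and the parameters s (batch size) and lr (step size) are illustrative assumptions.

```python
import numpy as np

# Minimal illustrative sketch, NOT the authors' code: one stochastic-
# gradient step for a rank-R CP model of a dense 3-way tensor under
# squared loss. Function and parameter names are hypothetical.
def gcp_sgd_step(X, A, B, C, s=1000, lr=1e-4, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    I, J, K = X.shape
    # Uniformly sample s entries with replacement (dense-tensor strategy).
    i = rng.integers(I, size=s)
    j = rng.integers(J, size=s)
    k = rng.integers(K, size=s)
    # Model values at sampled entries: m_t = sum_r A[i_t,r] B[j_t,r] C[k_t,r].
    m = np.einsum('sr,sr,sr->s', A[i], B[j], C[k])
    # Elementwise loss derivative; swapping this line changes the GCP loss
    # (e.g., Bernoulli for binary data, Huber for robust estimation).
    y = 2.0 * (m - X[i, j, k])
    # Rescale so the sampled gradient is an unbiased estimate of the full one.
    y *= (I * J * K) / s
    # Accumulate the sparse, MTTKRP-like gradient for each factor matrix.
    GA = np.zeros_like(A); GB = np.zeros_like(B); GC = np.zeros_like(C)
    np.add.at(GA, i, y[:, None] * (B[j] * C[k]))
    np.add.at(GB, j, y[:, None] * (A[i] * C[k]))
    np.add.at(GC, k, y[:, None] * (A[i] * B[j]))
    # Plain SGD update, kept simple for illustration.
    A -= lr * GA
    B -= lr * GB
    C -= lr * GC
    return A, B, C
```

Scaling the sampled derivatives by (I*J*K)/s keeps the gradient estimate unbiased for the full-tensor gradient; for sparse tensors the paper instead stratifies the samples to give precedence to nonzeros.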
Recommendations
- Generalized canonical polyadic tensor decomposition
- MuLOT: multi-level optimization of the canonical polyadic tensor decomposition at large-scale
- Accelerated doubly stochastic gradient descent for tensor CP decomposition
- A Practical Randomized CP Tensor Decomposition
- Inertial accelerated SGD algorithms for solving large-scale lower-rank tensor CP decomposition problems
Cites work
- A Limited Memory Algorithm for Bound Constrained Optimization
- A Practical Randomized CP Tensor Decomposition
- A Scalable Generative Graph Model with Community Structure
- Adaptive Algorithms to Track the PARAFAC Decomposition of a Third-Order Tensor
- Algorithm 862: MATLAB Tensor Classes for Fast Algorithm Prototyping
- Analysis of individual differences in multidimensional scaling via an \(n\)-way generalization of "Eckart-Young" decomposition
- Asymptotic performance of PCA for high-dimensional heteroscedastic data
- Completing any low-rank matrix, provably
- Efficient MATLAB Computations with Sparse and Factored Tensors
- Generalized canonical polyadic tensor decomposition
- Generalized low rank models
- Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations
- On Tensors, Sparsity, and Nonnegative Factorizations
- Positive tensor factorization
- Randomized Algorithms for Matrices and Data
- Sketching as a tool for numerical linear algebra
- Software for Sparse Tensor Decomposition on Emerging Computing Architectures
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm
- Subspace Learning and Imputation for Streaming Big Data Matrices and Tensors
- Tensor Decompositions and Applications
Cited in (13)
- A block-randomized stochastic method with importance sampling for CP tensor decomposition
- Accelerated doubly stochastic gradient descent for tensor CP decomposition
- Inertial accelerated SGD algorithms for solving large-scale lower-rank tensor CP decomposition problems
- Exploiting Efficient Representations in Large-Scale Tensor Decompositions
- An AO-ADMM Approach to Constraining PARAFAC2 on All Modes
- Parallel stochastic gradient algorithms for large-scale matrix completion
- Performance of the low-rank TT-SVD for large dense tensors on modern multicore CPUs
- Provable stochastic algorithm for large-scale fully-connected tensor network decomposition
- Scalable symmetric Tucker tensor decomposition
- Practical leverage-based sampling for low-rank tensor decomposition
- Computing the gradient in optimization algorithms for the CP decomposition in constant memory through tensor blocking
- Generalized canonical polyadic tensor decomposition
- On large-scale dynamic topic modeling with nonnegative CP tensor decomposition