On accelerating the regularized alternating least-squares algorithm for tensors
From MaRDI portal
Abstract: In this paper, we discuss the acceleration of the regularized alternating least squares (RALS) algorithm for tensor approximation. We propose a fast iterative method using Aitken-Steffensen-like updates for the regularized algorithm. Through numerical experiments, we demonstrate that the accelerated version converges faster than both the standard and the regularized alternating least squares algorithms. In addition, we analyze global convergence based on the Kurdyka-Łojasiewicz inequality and show that the RALS algorithm has a linear local convergence rate.
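The abstract refers to Aitken-Steffensen-like updates for accelerating a fixed-point iteration. As a minimal illustrative sketch (a scalar fixed-point iteration, not the paper's matrix-valued RALS updates), Aitken's delta-squared extrapolation can be written as:

```python
import math

def aitken_accelerate(g, x0, tol=1e-12, max_iter=100):
    """Accelerate a fixed-point iteration x_{k+1} = g(x_k) with
    Aitken's delta-squared (Steffensen-type) extrapolation.

    Scalar sketch for illustration only; the paper applies an
    analogous update to the factor matrices produced by RALS sweeps.
    """
    x = x0
    for _ in range(max_iter):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < 1e-15:  # sequence has (nearly) converged
            return x2
        # Aitken extrapolation: x - (Delta x)^2 / (Delta^2 x)
        x_acc = x - (x1 - x) ** 2 / denom
        if abs(x_acc - x) < tol:
            return x_acc
        x = x_acc
    return x

# Example: solve x = cos(x); the accelerated iteration converges
# in far fewer steps than plain fixed-point iteration.
root = aitken_accelerate(math.cos, 1.0)
```

The extrapolation reuses two successive iterates to estimate the limit of a linearly convergent sequence, which is why it pairs naturally with the linear local convergence rate established for RALS.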
Recommendations
- Accelerating alternating least squares for tensor decomposition by pairwise perturbation
- Efficient alternating least squares algorithms for low multilinear rank approximation of tensors
- Some convergence results on the regularized alternating least-squares method for tensor decomposition
- On global convergence of alternating least squares for tensor approximation
- A self-adaptive regularized alternating least squares method for tensor decomposition problems
- A trust-region-based alternating least-squares algorithm for tensor decompositions
- A seminorm regularized alternating least squares algorithm for canonical tensor decomposition
- On the global convergence of the alternating least squares method for rank-one approximation to generic tensors
- A fast alternating least squares method for third-order tensors based on a compression procedure
Cites work
- scientific article; zbMATH DE number 1305633
- scientific article; zbMATH DE number 3269388
- scientific article; zbMATH DE number 3381785
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion
- Analysis of individual differences in multidimensional scaling via an \(n\)-way generalization of "Eckart-Young" decomposition
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Convergence of the Iterates of Descent Methods for Analytic Cost Functions
- Introductory lectures on convex optimization. A basic course.
- Local convergence of the alternating least squares algorithm for canonical tensor approximation
- Maximum block improvement and polynomial optimization
- On Convergence of the Maximum Block Improvement Method
- On local convergence of alternating schemes for optimization of convex problems in the tensor train format
- On the Convergence of Iterative Methods for Semidefinite Linear Systems
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- Some applications of the Łojasiewicz gradient inequality
- Some convergence results on the regularized alternating least-squares method for tensor decomposition
- Tensor Decompositions and Applications
- Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem
- The Steffensen iteration method for systems of nonlinear equations
- The alternating linear scheme for tensor optimization in the tensor train format
Cited in (6)
- SOTT: greedy approximation of a tensor as a sum of tensor trains
- Acceleration of the alternating least squares algorithm for principal components analysis
- A fast alternating least squares method for third-order tensors based on a compression procedure
- Nesterov acceleration of alternating least squares for canonical tensor decomposition: momentum step size selection and restart mechanisms
- Some convergence results on the regularized alternating least-squares method for tensor decomposition
- A self-adaptive regularized alternating least squares method for tensor decomposition problems
This page was built for publication: On accelerating the regularized alternating least-squares algorithm for tensors