On accelerating the regularized alternating least-squares algorithm for tensors





Abstract: In this paper, we discuss the acceleration of the regularized alternating least squares (RALS) algorithm for tensor approximation. We propose a fast iterative method that applies Aitken-Steffensen-like updates to the regularized algorithm. Numerical experiments demonstrate a faster convergence rate for the accelerated version in comparison to both the standard and regularized alternating least squares algorithms. In addition, we analyze global convergence based on the Kurdyka-Łojasiewicz inequality and show that the RALS algorithm has a local linear convergence rate.
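The abstract above refers to Aitken-Steffensen-like updates for accelerating a fixed-point-style iteration. As a hedged illustration only (not the authors' tensor-specific scheme), the sketch below shows classical Aitken delta-squared (Steffensen) acceleration applied to a generic scalar fixed-point iteration x_{n+1} = g(x_n); the function `aitken_accelerate` and its parameters are hypothetical names chosen for this example.

```python
import math

def aitken_accelerate(x0, g, tol=1e-12, max_iter=100):
    """Steffensen-style acceleration of the fixed-point iteration x_{n+1} = g(x_n).

    Illustrative sketch: applies Aitken's delta-squared extrapolation,
        x_acc = x0 - (x1 - x0)^2 / (x2 - 2*x1 + x0),
    which typically converges faster than the plain iteration. This is a
    generic scalar analogue, not the RALS-specific update from the paper.
    """
    x = x0
    for _ in range(max_iter):
        x1 = g(x)          # one fixed-point step
        x2 = g(x1)         # a second fixed-point step
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < 1e-15:
            return x2      # denominator vanishes: iteration has converged
        x_new = x - (x1 - x) ** 2 / denom  # Aitken extrapolation
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: accelerate the slowly converging iteration x_{n+1} = cos(x_n),
# whose fixed point is approximately 0.739085.
root = aitken_accelerate(1.0, math.cos)
```

In the tensor setting, the same idea extrapolates the sequence of factor-matrix iterates produced by the regularized ALS sweeps rather than a scalar sequence.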




