Nonlinearly Preconditioned L-BFGS as an Acceleration Mechanism for Alternating Least Squares, with Application to Tensor Decomposition

MaRDI QID: Q6299481 (FDO: Q6299481)
DOI: 10.1002/NLA.2202
arXiv: 1803.08849
Wikidata: Q129530237 (Scholia: Q129530237)

Authors: H. De Sterck, Alexander Howse

Publication date: 23 March 2018

Abstract: We derive nonlinear acceleration methods based on the limited memory BFGS (L-BFGS) update formula for accelerating iterative optimization methods of alternating least squares (ALS) type applied to canonical polyadic (CP) and Tucker tensor decompositions. Our approach starts from linear preconditioning ideas that use linear transformations encoded by matrix multiplications, and extends these ideas to the case of genuinely nonlinear preconditioning, where the preconditioning operation involves fully nonlinear transformations. As such, the ALS-type iterations are used as fully nonlinear preconditioners for L-BFGS, or, equivalently, L-BFGS is used as a nonlinear accelerator for ALS. Numerical results show that the resulting methods perform much better than either stand-alone L-BFGS or stand-alone ALS, offering substantial improvements in terms of time-to-solution and robustness over state-of-the-art methods for large and noisy tensor problems, including previously described acceleration methods based on nonlinear conjugate gradients and nonlinear GMRES. Our approach provides a general L-BFGS-based acceleration mechanism for nonlinear optimization.
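The abstract outlines the core mechanism: an ALS sweep acts as a nonlinear preconditioner P, and the residual r(x) = x - P(x) replaces the gradient inside L-BFGS. Below is a minimal Python sketch of that idea for a rank-R CP decomposition of a 3-way tensor, written from the abstract alone; it is not the authors' implementation. All function names (als_sweep, lbfgs_direction, precond_lbfgs_cp), the backtracking line search, and the hyperparameters (memory, iteration count, tolerances) are illustrative assumptions.

```python
# Sketch (assumptions, not the paper's reference code): L-BFGS accelerating
# ALS for CP decomposition. One ALS sweep P(x) serves as the nonlinear
# preconditioner; r(x) = x - P(x) plays the role of the gradient, and
# secant pairs are built from successive iterates and residuals.
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: index `mode` becomes the row index.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(mats):
    # Column-wise Kronecker product; first matrix varies slowest.
    out = mats[0]
    for M in mats[1:]:
        out = np.einsum('ir,jr->ijr', out, M).reshape(-1, out.shape[1])
    return out

def cp_reconstruct(factors):
    A, B, C = factors
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def als_sweep(T, factors):
    # One ALS pass P(x): each factor solves a linear least-squares problem.
    F = [M.copy() for M in factors]
    for n in range(3):
        K = khatri_rao([F[m] for m in range(3) if m != n])
        F[n] = np.linalg.lstsq(K, unfold(T, n).T, rcond=None)[0].T
    return F

def pack(factors):
    return np.concatenate([M.ravel() for M in factors])

def unpack(x, shapes):
    out, i = [], 0
    for rows, cols in shapes:
        out.append(x[i:i + rows * cols].reshape(rows, cols))
        i += rows * cols
    return out

def lbfgs_direction(r, S, Y):
    # Standard two-loop recursion, applied to the ALS residual r instead
    # of a gradient. With empty memory it returns -r, i.e. a plain ALS step.
    q, alphas = r.copy(), []
    for s, y in zip(reversed(S), reversed(Y)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if S:
        q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])
    for (s, y), a in zip(zip(S, Y), reversed(alphas)):
        q += (a - (y @ q) / (y @ s)) * s
    return -q

def precond_lbfgs_cp(T, rank, iters=50, memory=5, seed=0):
    rng = np.random.default_rng(seed)
    shapes = [(d, rank) for d in T.shape]
    x = pack([rng.standard_normal(s) for s in shapes])
    f = lambda v: 0.5 * np.linalg.norm(T - cp_reconstruct(unpack(v, shapes))) ** 2
    r = x - pack(als_sweep(T, unpack(x, shapes)))   # nonlinear residual
    S, Y = [], []
    for _ in range(iters):
        d = lbfgs_direction(r, S, Y)
        step, fx = 1.0, f(x)
        while f(x + step * d) > fx and step > 1e-8:  # naive backtracking
            step *= 0.5
        x_new = x + step * d
        r_new = x_new - pack(als_sweep(T, unpack(x_new, shapes)))
        s, y = x_new - x, r_new - r
        if abs(y @ s) > 1e-12:                       # skip degenerate pairs
            S.append(s); Y.append(y)
            if len(S) > memory:
                S.pop(0); Y.pop(0)
        x, r = x_new, r_new
    return unpack(x, shapes), f(x)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, B, C = (rng.standard_normal((d, 4)) for d in (20, 25, 30))
    T = np.einsum('ir,jr,kr->ijk', A, B, C)          # exact rank-4 tensor
    factors, err = precond_lbfgs_cp(T, rank=4)
    print(f"final 0.5*||T - CP||^2: {err:.3e}")
```

One design point worth noting: with an empty secant memory the two-loop recursion reduces to the direction P(x) - x, so the very first step is a plain ALS step. This is one simple way to realize the equivalence the abstract states, that ALS serves as a nonlinear preconditioner for L-BFGS, or, equivalently, L-BFGS accelerates ALS.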