Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization
From MaRDI portal
Publication:6437408
arXiv: 2305.12568
MaRDI QID: Q6437408
Authors: Hanmin Li, Avetik Karagulyan, Peter Richtárik
Publication date: 21 May 2023
Abstract: This paper introduces a new method for minimizing matrix-smooth non-convex objectives through the use of novel Compressed Gradient Descent (CGD) algorithms enhanced with a matrix-valued stepsize. The proposed algorithms are analyzed theoretically, first in the single-node setting and subsequently in the distributed setting. Our theoretical results reveal that the matrix stepsize in CGD can capture the objective's structure and lead to faster convergence than a scalar stepsize. As a byproduct of our general results, we emphasize the importance of selecting the compression mechanism and the matrix stepsize in a layer-wise manner, taking advantage of model structure. Moreover, we provide theoretical guarantees for free compression by designing specific layer-wise compressors for non-convex matrix-smooth objectives. Our findings are supported by empirical evidence.
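The abstract describes a CGD-style iteration in which the gradient is first compressed by a randomized sketch and then rescaled by a matrix-valued (rather than scalar) stepsize. The following is a minimal sketch of that idea on a toy quadratic objective, not the authors' exact Det-CGD algorithm or tuning: the compressor is an unbiased Rand-k coordinate sketch, and the matrix stepsize is a diagonal matrix built from Gershgorin-style per-coordinate smoothness bounds (all of these concrete choices are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic objective f(x) = 0.5 * x^T A x with A positive definite,
# standing in for a matrix-smooth objective (illustrative, not from the paper).
d = 10
M = rng.standard_normal((d, d))
A = M @ M.T / d + np.eye(d)

def grad(x):
    return A @ x

def rand_k_sketch(d, k, rng):
    """Rand-k sketch: keep k of d coordinates, scaled so that E[S] = I."""
    idx = rng.choice(d, size=k, replace=False)
    S = np.zeros((d, d))
    S[idx, idx] = d / k
    return S

k = 5

# Diagonal matrix stepsize from per-coordinate Gershgorin row-sum bounds:
# A <= diag(L_rows) in the PSD order, which guarantees monotone descent here.
L_rows = np.sum(np.abs(A), axis=1)
D = np.diag((k / d) / L_rows)

x = rng.standard_normal(d)
for _ in range(500):
    S = rand_k_sketch(d, k, rng)
    # CGD-style update: matrix stepsize applied to the sketched gradient.
    x = x - D @ (S @ grad(x))

f_final = float(x @ A @ x / 2)
print(f_final)
```

The diagonal stepsize takes a larger step along coordinates where the objective is flatter (small row sum) and a smaller step where it is steeper, which is the structural advantage over a single scalar stepsize that the abstract highlights.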