Smooth Optimization with Approximate Gradient

Abstract: We show that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is computed only up to a small, uniformly bounded error. In applications of this method to semidefinite programs, this means that, in some instances, only a few leading eigenvalues of the current iterate need to be computed instead of a full matrix exponential, which significantly reduces the method's computational cost. This also allows sparse problems to be solved efficiently using sparse maximum-eigenvalue packages.
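
To make the idea concrete, the sketch below runs a standard accelerated (Nesterov-type) gradient recursion on the smoothed maximum-eigenvalue function f_mu(x) = mu log tr exp(A(x)/mu), replacing the full matrix exponential in the gradient with its k leading eigenpairs computed by scipy.sparse.linalg.eigsh. This is a minimal illustration under stated assumptions, not the paper's algorithm verbatim: the matrices A0 and A_i, the smoothing parameter mu, the eigenpair count k, the Lipschitz estimate L, and the iteration count are all illustrative choices.

```python
# A minimal sketch, not the paper's algorithm verbatim: an accelerated
# (Nesterov-type) gradient recursion on the smoothed maximum-eigenvalue
# function
#     f_mu(x) = mu * log tr exp(A(x)/mu),  A(x) = A0 + sum_i x_i * A_i,
# with the matrix exponential in the gradient replaced by its k leading
# eigenpairs. A0, Ai, mu, k, L and the iteration count are illustrative
# assumptions, not values from the paper.
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
n, m = 200, 5                           # matrix size, number of variables
A0 = rng.standard_normal((n, n)); A0 = (A0 + A0.T) / 2
Ai = [(B + B.T) / 2 for B in (rng.standard_normal((n, n)) for _ in range(m))]
mu, k = 0.1, 10                         # smoothing parameter, eigenpairs kept

def A_of(x):
    return A0 + sum(xi * B for xi, B in zip(x, Ai))

def approx_grad(x):
    # Exact gradient in X is exp(X/mu) / tr exp(X/mu); here it is
    # approximated from the k largest eigenpairs, whose softmax weights
    # dominate the trace-exponential once mu is small -- the regime in
    # which the truncation error stays uniformly small.
    lam, V = eigsh(A_of(x), k=k, which="LA")   # k largest eigenpairs
    w = np.exp((lam - lam.max()) / mu)
    w /= w.sum()                               # softmax weights over top-k
    G = (V * w) @ V.T                          # rank-k surrogate gradient in X
    f = mu * np.log(np.exp((lam - lam.max()) / mu).sum()) + lam.max()
    return f, np.array([np.vdot(B, G) for B in Ai])   # chain rule through A

# Crude upper bound on the gradient's Lipschitz constant, L <= ||A||^2 / mu.
L = sum(np.linalg.norm(B, 2) for B in Ai) ** 2 / mu
x = x_prev = np.zeros(m)
for t in range(200):
    y = x + (t / (t + 3)) * (x - x_prev)       # momentum extrapolation
    f, g = approx_grad(y)
    x, x_prev = y - g / L, x                   # gradient step from y
    if t % 50 == 0:
        print(f"iter {t:3d}  f_mu(y) = {f:.6f}")
```

Because eigsh needs only matrix-vector products with A(x), the same sketch applies unchanged when A0 and the A_i are scipy.sparse matrices, which is the sparse-eigenvalue setting the abstract mentions.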




Cited in 41 documents.

