Smooth Optimization with Approximate Gradient


DOI: 10.1137/060676386
zbMATH Open: 1180.90378
arXiv: math/0512344
OpenAlex: W2121210949
MaRDI QID: Q3395010
FDO: Q3395010


Author: Alexandre d'Aspremont


Publication date: 20 August 2009

Published in: SIAM Journal on Optimization

Abstract: We show that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is only computed up to a small, uniformly bounded error. In applications of this method to semidefinite programs, this means in some instances computing only a few leading eigenvalues of the current iterate instead of a full matrix exponential, which significantly reduces the method's computational cost. This also allows sparse problems to be solved efficiently using sparse maximum eigenvalue packages.
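
The abstract describes running Nesterov's smooth first-order method with a gradient oracle that is only accurate up to a small, uniformly bounded error. As a rough illustration of that setting (not the paper's own scheme), the following Python sketch runs a textbook accelerated gradient iteration on a toy quadratic with an artificially perturbed gradient; the momentum weight k/(k+3), the noise model, and all problem data are illustrative assumptions.

import numpy as np

def nesterov_approx_grad(approx_grad, x0, L, n_iters=500):
    """Accelerated gradient method driven by an approximate gradient oracle.
    approx_grad(x) is assumed to return the true gradient up to a small,
    uniformly bounded error; L is a Lipschitz constant of the true gradient.
    Illustrative sketch only, not the exact scheme analyzed in the paper."""
    x = y = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        g = approx_grad(y)               # inexact gradient at the extrapolated point
        x_next = y - g / L               # gradient step with step size 1/L
        t = k / (k + 3)                  # standard momentum weight (assumed variant)
        y = x_next + t * (x_next - x)    # extrapolation (momentum) step
        x = x_next
    return x

# Toy example: minimize f(x) = 0.5 * x^T A x - b^T x with a gradient
# corrupted by noise of norm exactly delta (hypothetical data).
rng = np.random.default_rng(0)
A = np.diag(np.linspace(1.0, 10.0, 50))
b = rng.standard_normal(50)
delta = 1e-6                             # uniform bound on the gradient error

def noisy_grad(x):
    noise = rng.standard_normal(x.shape)
    return A @ x - b + delta * noise / np.linalg.norm(noise)

x_star = nesterov_approx_grad(noisy_grad, np.zeros(50), L=10.0)
print(np.linalg.norm(A @ x_star - b))    # residual settles near the error level

The point of the sketch is that the iteration itself is unchanged; only the oracle is inexact, which mirrors the paper's observation that a few leading eigenvalues can stand in for a full matrix exponential in the semidefinite-programming application.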


Full work available at URL: https://arxiv.org/abs/math/0512344









Cited in: 41 documents






