Inexact reduced gradient methods in nonconvex optimization

Publication: 6395673

arXiv: 2204.01806
MaRDI QID: Q6395673
FDO: Q6395673

Dat Ba Tran, Pham Duy Khanh, Boris S. Mordukhovich

Publication date: 4 April 2022

Abstract: This paper proposes and develops new linesearch methods with inexact gradient information for finding stationary points of nonconvex continuously differentiable functions on finite-dimensional spaces. Some abstract convergence results for a broad class of linesearch methods are established. A general scheme for inexact reduced gradient (IRG) methods is proposed, where the errors in the gradient approximation automatically adapt to the magnitudes of the exact gradients. The sequences of iterates are shown to possess stationary accumulation points when different stepsize selections are employed. Convergence results with constructive convergence rates for the developed IRG methods are established under the Kurdyka-Łojasiewicz property. The obtained results for the IRG methods are confirmed by encouraging numerical experiments, which demonstrate the advantages of automatically controlled errors in IRG methods over other frequently used error selections.
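
The mechanism described in the abstract, a descent linesearch driven by a gradient oracle whose error tolerance adapts to the magnitude of the approximate gradient itself, can be illustrated in a few lines. The following Python sketch is a minimal illustration under stated assumptions, not the paper's actual algorithm: the function irg_backtracking, the oracle grad_approx, and the parameters mu, sigma, and beta are hypothetical names chosen for exposition, and the rule eps <= mu * ||g|| is one plausible reading of automatically adaptive error control.

```python
import numpy as np

rng = np.random.default_rng(0)

def irg_backtracking(f, grad_approx, x0, mu=0.5, sigma=1e-4, beta=0.5,
                     tol=1e-6, max_iter=10_000):
    """Steepest-descent linesearch with an adaptively controlled inexact
    gradient: grad_approx(x, eps) is assumed to return g with
    ||g - grad f(x)|| <= eps, and eps is shrunk until eps <= mu * ||g||."""
    x = np.asarray(x0, dtype=float)
    eps = 1.0  # initial gradient-error tolerance (illustrative choice)
    for _ in range(max_iter):
        g = grad_approx(x, eps)
        gnorm = np.linalg.norm(g)
        # Adaptive error control: tighten eps until it is dominated by
        # the approximate gradient's own magnitude.
        while gnorm > tol and eps > mu * gnorm:
            eps *= beta
            g = grad_approx(x, eps)
            gnorm = np.linalg.norm(g)
        if gnorm <= tol:
            break  # approximate stationarity reached
        # Armijo-type backtracking along the inexact negative gradient;
        # eps <= mu * ||g|| with mu < 1 ensures -g is a descent direction.
        t = 1.0
        fx = f(x)
        while t > 1e-12 and f(x - t * g) > fx - sigma * t * gnorm ** 2:
            t *= beta
        x = x - t * g
    return x

# Illustrative run: the nonconvex Rosenbrock function with a noisy gradient
# oracle whose error norm is guaranteed to stay below the requested eps.
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def grad_approx(x, eps):
    g = np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                  200 * (x[1] - x[0] ** 2)])
    d = rng.standard_normal(g.shape)
    return g + 0.5 * eps * d / np.linalg.norm(d)  # ||error|| = eps/2 <= eps

print(irg_backtracking(f, grad_approx, [0.0, 0.0]))
```

Shrinking eps geometrically whenever it exceeds mu * ||g|| forces the gradient accuracy to improve as the iterates approach stationarity, which is the behavior the abstract credits for the advantage of automatically controlled errors over fixed or preset error schedules.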
