Exploiting negative curvature in deterministic and stochastic optimization


DOI: 10.1007/S10107-018-1335-8
zbMATH Open: 1417.49036
arXiv: 1703.00412
OpenAlex: W2963321060
Wikidata: Q129134954 (Scholia: Q129134954)
MaRDI QID: Q2425164

Frank E. Curtis, Daniel P. Robinson

Publication date: 26 June 2019

Published in: Mathematical Programming. Series A. Series B

Abstract: This paper addresses the question of whether it can be beneficial for an optimization algorithm to follow directions of negative curvature. Although prior work has established convergence results for algorithms that integrate both descent and negative curvature steps, there has not yet been extensive numerical evidence showing that such methods offer consistent performance improvements. In this paper, we present new frameworks for combining descent and negative curvature directions: alternating two-step approaches and dynamic step approaches. The aspect that distinguishes our approaches from ones previously proposed is that they make algorithmic decisions based on (estimated) upper-bounding models of the objective function. A consequence of this aspect is that our frameworks can, in theory, employ fixed stepsizes, which makes the methods readily translatable from deterministic to stochastic settings. For deterministic problems, we show that instances of our dynamic framework yield gains in performance compared to related methods that only follow descent steps. We also show that gains can be made in a stochastic setting in cases where a standard stochastic-gradient-type method might make slow progress.
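
As a rough illustration of the dynamic step idea described in the abstract, the sketch below compares the decrease predicted by two upper-bounding models, a quadratic model for a gradient step with fixed stepsize 1/L and a cubic model for a negative-curvature step, and takes whichever step the models favor. The constants L (a Lipschitz constant for the gradient) and sigma (a Lipschitz constant for the Hessian), the particular stepsizes, and the use of a full eigendecomposition are simplifying assumptions made for illustration; this is a sketch of the general technique, not the authors' exact algorithm.

    import numpy as np

    def dynamic_step(grad, hess, x, L, sigma):
        # One illustrative iteration: choose between a gradient-descent step
        # and a negative-curvature step, whichever upper-bounding model
        # predicts the larger objective decrease.
        g = grad(x)
        lam, V = np.linalg.eigh(hess(x))   # eigenvalues in ascending order
        lam_min, d = lam[0], V[:, 0]       # leftmost eigenpair of the Hessian

        # Quadratic model: f(x - g/L) <= f(x) - ||g||^2 / (2L).
        dec_grad = g @ g / (2.0 * L)

        # Cubic model along a unit direction d with d'Hd = lam_min < 0, using
        # stepsize beta = 2|lam_min|/sigma:
        #   f(x + beta*d) <= f(x) - (2/3) |lam_min|^3 / sigma^2.
        dec_curv = (2.0 / 3.0) * abs(lam_min) ** 3 / sigma ** 2 if lam_min < 0 else 0.0

        if dec_grad >= dec_curv:
            return x - g / L                          # descent step
        if g @ d > 0:                                 # orient d so that g'd <= 0
            d = -d
        return x + (2.0 * abs(lam_min) / sigma) * d   # negative-curvature step

    # Example on a saddle, f(x, y) = x^2 - y^2 (a hypothetical test problem);
    # sigma = 1 over-estimates the (zero) Hessian Lipschitz constant, which
    # keeps the cubic bound valid.
    grad = lambda x: np.array([2.0 * x[0], -2.0 * x[1]])
    hess = lambda x: np.diag([2.0, -2.0])
    x = np.array([1e-8, 1e-8])                        # near the saddle: tiny gradient
    for _ in range(3):
        x = dynamic_step(grad, hess, x, L=2.0, sigma=1.0)

Because both predicted decreases come from fixed, model-based stepsizes, the comparison costs essentially nothing per iteration and requires no line search, which is what makes this kind of scheme translatable from deterministic to stochastic settings.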


Full work available at URL: https://arxiv.org/abs/1703.00412






Cited in: 14 documents






