Hessian barrier algorithms for linearly constrained optimization problems

From MaRDI portal
Publication:5233100

DOI: 10.1137/18M1215682
zbMATH Open: 1421.90164
arXiv: 1809.09449
OpenAlex: W2970875067
Wikidata: Q127324284
Scholia: Q127324284
MaRDI QID: Q5233100
FDO: Q5233100


Authors: Immanuel M. Bomze, Panayotis Mertikopoulos, Werner Schachinger, Mathias Staudigl


Publication date: 16 September 2019

Published in: SIAM Journal on Optimization

Abstract: In this paper, we propose an interior-point method for linearly constrained optimization problems (possibly nonconvex). The method, which we call the Hessian barrier algorithm (HBA), combines a forward Euler discretization of Hessian Riemannian gradient flows with an Armijo backtracking step-size policy. In this way, HBA can be seen as an alternative to mirror descent (MD), and contains as special cases the affine scaling algorithm, regularized Newton processes, and several other iterative solution methods. Our main result is that, modulo a non-degeneracy condition, the algorithm converges to the problem's set of critical points; hence, in the convex case, the algorithm converges globally to the problem's minimum set. In the case of linearly constrained quadratic programs (not necessarily convex), we also show that the method's convergence rate is O(1/k^ρ) for some ρ ∈ (0,1] that depends only on the choice of kernel function (i.e., not on the problem's primitives). These theoretical results are validated by numerical experiments on standard non-convex test functions and large-scale traffic assignment problems.
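To make the abstract's description concrete, the following is a minimal, hypothetical sketch of one HBA-style iteration for the special case of minimization over the probability simplex: a forward Euler step of a Hessian Riemannian gradient flow (here with the Shahshahani/log-barrier geometry, which yields a replicator-type direction), combined with Armijo backtracking that also enforces strict feasibility. All function and parameter names are assumptions for illustration; this is not the authors' reference implementation.

```python
import numpy as np

def hba_step(f, grad_f, x, gamma0=1.0, beta=0.5, c=1e-4):
    """One forward-Euler step of a Hessian Riemannian gradient flow on the
    simplex, with an Armijo backtracking step-size policy (hypothetical
    sketch of the HBA scheme described in the abstract)."""
    g = grad_f(x)
    # Riemannian descent direction for the Shahshahani metric on the simplex;
    # it is tangent to {sum(x) = 1}, so the constraint is preserved exactly.
    v = -x * (g - x @ g)
    gamma, fx = gamma0, f(x)
    while True:
        x_new = x + gamma * v
        # Shrink the step until the iterate is strictly interior (barrier
        # effect) and the Armijo sufficient-decrease condition holds.
        if np.all(x_new > 0) and f(x_new) <= fx + c * gamma * (g @ v):
            return x_new
        gamma *= beta

# Usage: minimize a convex quadratic f(x) = ||x - t||^2 over the simplex.
t = np.array([0.2, 0.3, 0.5])
f = lambda x: float(np.sum((x - t) ** 2))
grad_f = lambda x: 2.0 * (x - t)

x = np.ones(3) / 3.0  # strictly interior starting point
for _ in range(200):
    x = hba_step(f, grad_f, x)
```

Since the direction v is tangent to the affine constraint, feasibility is maintained without projection; the backtracking loop alone keeps the iterates in the interior of the positive orthant, which is the role the barrier kernel plays in the general algorithm.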


Full work available at URL: https://arxiv.org/abs/1809.09449










Cited In (6)





This page was built for publication: Hessian barrier algorithms for linearly constrained optimization problems
