Trust-region algorithms for training responses: machine learning methods using indefinite Hessian approximations

From MaRDI portal
Publication: 5113710

DOI: 10.1080/10556788.2019.1624747 · zbMATH Open: 1440.90092 · arXiv: 1807.00251 · OpenAlex: W2811026747 · Wikidata: Q127744831 · MaRDI QID: Q5113710

Joshua D. Griffin, Jennifer B. Erway, Roummel F. Marcia, Riadh Omheni

Publication date: 16 June 2020

Published in: Optimization Methods & Software

Abstract: Machine learning (ML) problems are often posed as highly nonlinear and nonconvex unconstrained optimization problems. Methods for solving ML problems based on stochastic gradient descent scale easily to very large problems but may require fine-tuning many hyper-parameters. Quasi-Newton approaches based on the limited-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) update typically do not require manual hyper-parameter tuning but suffer from approximating a potentially indefinite Hessian with a positive-definite matrix. Hessian-free methods leverage the ability to perform Hessian-vector multiplication without needing the entire Hessian matrix, but the complexity of each iteration is significantly greater than that of quasi-Newton methods. In this paper we propose an alternative approach for solving ML problems, based on a quasi-Newton trust-region framework for large-scale optimization that allows for indefinite Hessian approximations. Numerical experiments on a standard testing data set show that, with a fixed computational time budget, the proposed methods achieve better results than the traditional limited-memory BFGS and Hessian-free methods.
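The abstract notes that Hessian-free methods rely on Hessian-vector products computed without forming the full Hessian. A minimal illustrative sketch of that idea (not the paper's own method) is the standard finite-difference approximation Hv ≈ (∇f(x + εv) − ∇f(x)) / ε; the test function and names below are chosen for illustration only:

```python
import numpy as np

def rosenbrock_grad(x):
    """Gradient of the 2-D Rosenbrock function (illustrative test problem)."""
    g = np.zeros_like(x)
    g[0] = -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0] ** 2)
    g[1] = 200.0 * (x[1] - x[0] ** 2)
    return g

def hessian_vector_product(grad, x, v, eps=1e-6):
    """Approximate H(x) @ v by finite differences of the gradient,
    without ever forming the Hessian matrix itself."""
    return (grad(x + eps * v) - grad(x)) / eps

# At the minimizer (1, 1) the exact Rosenbrock Hessian is
# [[802, -400], [-400, 200]], so H @ (1, 0) = (802, -400).
x = np.array([1.0, 1.0])
v = np.array([1.0, 0.0])
Hv = hessian_vector_product(rosenbrock_grad, x, v)
```

Each such product costs one extra gradient evaluation, which is why Hessian-free iterations (often many products per Krylov solve) are more expensive per step than quasi-Newton updates.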


Full work available at URL: https://arxiv.org/abs/1807.00251





Cited In (4)
