On the Numerical Performance of Derivative-Free Optimization Methods Based on Finite-Difference Approximations

From MaRDI portal

DOI: 10.1080/10556788.2022.2121832 · zbMATH Open: 1515.90134 · arXiv: 2102.09762 · MaRDI QID: Q6361024


Authors: Hao-Jun Michael Shi, Melody Qiming Xuan, Figen Oztoprak, Jorge Nocedal


Publication date: 19 February 2021

Abstract: The goal of this paper is to investigate an approach for derivative-free optimization that has not received sufficient attention in the literature and is yet one of the simplest to implement and parallelize. It consists of computing gradients of a smoothed approximation of the objective function (and constraints), and employing them within established codes. These gradient approximations are calculated by finite differences, with a differencing interval determined by the noise level in the functions and a bound on the second or third derivatives. It is assumed that the noise level is known or can be estimated by means of difference tables or sampling. The use of finite differences has been largely dismissed in the derivative-free optimization literature as too expensive in terms of function evaluations and/or as impractical when the objective function contains noise. The test results presented in this paper suggest that such views should be re-examined and that the finite-difference approach has much to recommend it. The tests compared NEWUOA, DFO-LS and COBYLA against the finite-difference approach on three classes of problems: general unconstrained problems, nonlinear least squares, and general nonlinear programs with equality constraints.
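The abstract's central ingredient, a differencing interval chosen from the noise level and a second-derivative bound, can be illustrated with a minimal sketch. This is not the paper's code: the function names and the test objective are assumptions, and the interval h = 2*sqrt(eps_f / M2) is the classical forward-difference choice that balances truncation error (bounded via M2 on |f''|) against the noise eps_f in function values.

```python
import numpy as np

def forward_difference_gradient(f, x, noise_level, second_derivative_bound):
    """Noise-aware forward-difference gradient (illustrative sketch).

    The interval h = 2*sqrt(eps_f / M2) balances the truncation error,
    bounded using M2 >= |f''|, against the noise eps_f in the evaluations
    of f. All names here are hypothetical, not from the paper.
    """
    h = 2.0 * np.sqrt(noise_level / second_derivative_bound)
    fx = f(x)
    g = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        step = np.zeros_like(x, dtype=float)
        step[i] = h
        g[i] = (f(x + step) - fx) / h  # one extra evaluation per coordinate
    return g

# Toy test: a quadratic with additive noise of magnitude ~1e-8.
rng = np.random.default_rng(0)
eps_f = 1e-8
f = lambda x: 0.5 * np.dot(x, x) + eps_f * rng.standard_normal()
x0 = np.array([1.0, -2.0])
g = forward_difference_gradient(f, x0, noise_level=eps_f,
                                second_derivative_bound=1.0)
# g approximates the true gradient x0 to within O(sqrt(eps_f))
```

The n+1 evaluations per gradient are embarrassingly parallel, which is the parallelization advantage the abstract alludes to.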













