On the Application of Danskin's Theorem to Derivative-Free Minimax Optimization

DOI: 10.1063/1.5089993
arXiv: 1805.06322
MaRDI QID: Q6301649


Authors: Abdullah Al-Dujaili, S. Srikant, Erik Hemberg, Una-May O'Reilly


Publication date: 15 May 2018

Abstract: Motivated by Danskin's theorem, gradient-based methods have been applied with empirical success to minimax problems that involve a non-convex outer minimization and a non-concave inner maximization. Separately, recent work has shown that Evolution Strategies (ES) algorithms act as stochastic gradient approximators that seek robust solutions. In this paper, we address black-box (gradient-free) minimax problems, which have long been tackled in a coevolutionary setup. To this end, and justified by Danskin's theorem, we employ ES as a stochastic estimator of the descent direction. The proposed approach is validated on a collection of black-box minimax problems. In our experiments, the method's performance is comparable with that of its coevolutionary counterparts and favorable on high-dimensional problems. Its efficacy is demonstrated on a real-world application.
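
The recipe summarized in the abstract rests on Danskin's theorem: for phi(x) = max_y f(x, y), a descent direction for phi at x is given by the gradient of f with respect to x evaluated at an (approximate) inner maximizer y*(x). The sketch below illustrates one way to realize this with an ES-style stochastic gradient estimate when f is a black box. It is not the authors' implementation: the (1+1)-ES inner solver, the antithetic sampling scheme, the test function, and all parameter names (sigma, n_samples, lr) are illustrative assumptions.

# Illustrative sketch only: Danskin-style descent with an ES gradient
# estimate for the black-box minimax problem  min_x max_y f(x, y).
# All names and parameter choices are hypothetical, not the authors' code.
import numpy as np

def inner_max(f, x, y0, sigma=0.1, iters=50, rng=None):
    """Approximate y*(x) = argmax_y f(x, y) with a simple (1+1)-ES."""
    rng = np.random.default_rng() if rng is None else rng
    y = y0.copy()
    best = f(x, y)
    for _ in range(iters):
        cand = y + sigma * rng.standard_normal(y.shape)
        val = f(x, cand)
        if val > best:              # keep the candidate if it increases f
            y, best = cand, val
    return y

def es_gradient(f, x, y, sigma=0.1, n_samples=20, rng=None):
    """Antithetic ES estimate of grad_x f(x, y) at the fixed maximizer y."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        eps = rng.standard_normal(x.shape)
        grad += eps * (f(x + sigma * eps, y) - f(x - sigma * eps, y))
    return grad / (2.0 * sigma * n_samples)

def minimax_descent(f, x0, y0, lr=0.05, outer_iters=100, rng=None):
    """Outer loop: descend along the ES-estimated gradient at y*(x)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x, y = x0.copy(), y0.copy()
    for _ in range(outer_iters):
        y = inner_max(f, x, y, rng=rng)      # approximate inner maximizer
        g = es_gradient(f, x, y, rng=rng)    # Danskin-style descent direction
        x -= lr * g
    return x, y

if __name__ == "__main__":
    # Toy saddle-like test problem: f(x, y) = ||x||^2 - ||x - y||^2,
    # whose minimax solution is x = 0.
    f = lambda x, y: float(np.sum(x**2) - np.sum((x - y)**2))
    x, y = minimax_descent(f, x0=np.ones(5), y0=np.zeros(5))
    print("approximate minimax x:", x)

The only role of Danskin's theorem here is to justify holding the approximate maximizer y fixed while estimating the outer gradient; the ES estimator itself never queries derivatives of f, which is what makes the scheme applicable in the black-box setting described in the abstract.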
