An implicit gradient-descent procedure for minimax problems


DOI: 10.1007/s00186-022-00805-w · arXiv: 1906.00233 · MaRDI QID: Q6319786 · FDO: Q6319786


Authors: Montacer Essid, Esteban G. Tabak, Giulio Trigila


Publication date: 1 June 2019

Abstract: A game-theory-inspired methodology is proposed for finding a function's saddle points. While explicit descent methods are known to have severe convergence issues, implicit methods are natural in an adversarial setting, as they take the other player's optimal strategy into account. The implicit scheme proposed has an adaptive learning rate that makes it transition to Newton's method in the neighborhood of saddle points. Convergence is shown through local analysis and, in non-convex-concave settings, through numerical examples in optimal transport and linear programming. An ad hoc quasi-Newton method is developed for high-dimensional problems, for which inverting the Hessian of the objective function may entail a high computational cost.
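The mechanism the abstract describes can be sketched concretely. Below is a minimal Python/NumPy illustration of a linearized implicit step on the twisted gradient F = (grad_x f, -grad_y f), of the form z_new = z - eta * (I + eta * J(z))^{-1} F(z): for small eta this is close to an explicit gradient step, and as eta grows it approaches the Newton step, matching the transition the abstract mentions. The bilinear test function f(x, y) = x*y and the step-size rule eta = 1/||F(z)|| are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def implicit_minimax_step(F, J, z, eta):
    """One linearized implicit step on the twisted gradient F:
        z_new = z - eta * (I + eta * J(z))^{-1} F(z).
    For small eta this is close to an explicit gradient step; as
    eta -> infinity it approaches the Newton step -J(z)^{-1} F(z)."""
    n = z.size
    return z - eta * np.linalg.solve(np.eye(n) + eta * J(z), F(z))

# Hypothetical test problem: f(x, y) = x * y, a bilinear game whose unique
# saddle point is the origin. Plain gradient descent-ascent spirals outward
# here, while the implicit step contracts toward the saddle point.
F = lambda z: np.array([z[1], -z[0]])           # (df/dx, -df/dy)
J = lambda z: np.array([[0.0, 1.0],
                        [-1.0, 0.0]])           # Jacobian of F (constant)

z = np.array([1.0, 1.0])
for _ in range(25):
    # Assumed adaptive rule: let eta grow as the gradient vanishes, so the
    # iteration transitions toward Newton's method near the saddle point.
    eta = 1.0 / max(np.linalg.norm(F(z)), 1e-12)
    z = implicit_minimax_step(F, J, z, eta)

print(z)  # close to the saddle point (0, 0)
```

On this bilinear example the update reduces to z_new = (I + eta * J)^{-1} z, which shrinks the norm by a factor 1/sqrt(1 + eta^2) per step; since eta grows as z approaches the origin, convergence accelerates near the saddle, as the abstract's local analysis suggests.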
