Inexact Sequential Quadratic Optimization for Minimizing a Stochastic Objective Function Subject to Deterministic Nonlinear Equality Constraints

From MaRDI portal
Publication:6372255

arXiv: 2107.03512
MaRDI QID: Q6372255
FDO: Q6372255


Authors: Frank E. Curtis, Daniel P. Robinson, Baoyu Zhou


Publication date: 7 July 2021

Abstract: An algorithm is proposed, analyzed, and tested experimentally for solving stochastic optimization problems in which the decision variables are constrained to satisfy equations defined by deterministic, smooth, and nonlinear functions. It is assumed that constraint function and derivative values can be computed, but that only stochastic approximations are available for the objective function and its derivatives. The algorithm is of the sequential quadratic optimization variety. A distinguishing feature of the algorithm is that it allows inexact subproblem solutions to be employed, which is particularly useful in large-scale settings when the matrices defining the subproblems are too large to form and/or factorize. Conditions are imposed on the inexact subproblem solutions that account for the fact that only stochastic objective gradient estimates are available. Convergence results in expectation are established for the method. Numerical experiments show that it outperforms an alternative algorithm that employs highly accurate subproblem solutions in every iteration.




Has companion code repository: https://github.com/frankecurtis/StochasticSQP
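The abstract describes sequential quadratic optimization steps in which the KKT subproblem is solved only inexactly, using a stochastic objective gradient estimate and exact constraint information. The sketch below is a minimal toy illustration of that general idea, not the authors' actual algorithm (see the linked StochasticSQP repository for that): the problem, Hessian model, step size `alpha`, noise level, and residual tolerance `kappa` are all assumptions made for the example.

```python
# Illustrative sketch (NOT the paper's method): stochastic SQP steps for
#   min E[f(x; xi)]  s.t.  c(x) = 0,
# where the linear KKT subproblem is solved only inexactly, to a relative
# residual tolerance, by a cheap iterative method.
import numpy as np

rng = np.random.default_rng(0)

def stoch_grad(x, noise=0.01):
    """Stochastic estimate of grad f for f(x) = 0.5*||x||^2 (illustrative)."""
    return x + noise * rng.standard_normal(x.shape)

def constraint(x):
    """Deterministic equality constraint c(x) = x1 + x2 - 1 (illustrative)."""
    return np.array([x[0] + x[1] - 1.0])

J = np.array([[1.0, 1.0]])   # constraint Jacobian (constant for this c)
H = np.eye(2)                # Hessian model (identity, as an assumption)

def inexact_kkt_solve(K, b, kappa=0.1, max_inner=50):
    """Approximately solve K z = b by gradient descent on 0.5*||K z - b||^2,
    stopping once the residual satisfies ||K z - b|| <= kappa * ||b||."""
    z = np.zeros_like(b)
    step = 1.0 / np.linalg.norm(K, 2) ** 2
    for _ in range(max_inner):
        r = K @ z - b
        if np.linalg.norm(r) <= kappa * np.linalg.norm(b):
            break
        z -= step * (K.T @ r)
    return z

def sqp_iterate(x, alpha=0.5):
    g, c = stoch_grad(x), constraint(x)
    n, m = len(x), len(c)
    # Assemble the KKT system  [H J^T; J 0] [d; y] = [-g; -c]
    # and extract the primal part d of its inexact solution.
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    b = np.concatenate([-g, -c])
    d = inexact_kkt_solve(K, b)[:n]
    return x + alpha * d

x = np.array([3.0, -2.0])
for _ in range(200):
    x = sqp_iterate(x)
print(x)  # settles near the solution (0.5, 0.5), up to gradient noise
```

The point of the inexactness condition is that `inexact_kkt_solve` never needs to form or factorize the full KKT matrix to high accuracy; a relative-residual stopping rule of this flavor is what makes such methods attractive when the subproblem matrices are too large to factorize.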









