An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
Publication: Q6038658
Abstract: We consider nonlinear optimization problems with a stochastic objective and deterministic equality constraints. We assume that the objective value, gradient, and Hessian are inaccessible, while stochastic estimates of them can be computed, for example, by subsampling. We propose a stochastic algorithm based on sequential quadratic programming (SQP) that uses a differentiable exact augmented Lagrangian as the merit function. To motivate the design, we first revisit and simplify an old SQP method (Lucidi, 1990) developed for deterministic problems, which serves as the skeleton of our stochastic algorithm. Building on the simplified deterministic algorithm, we then propose a non-adaptive stochastic SQP in which the gradient and Hessian are replaced by stochastic estimates but the stepsizes remain deterministic and prespecified. Finally, we incorporate a recent stochastic line search procedure (Paquette and Scheinberg, 2020) into the non-adaptive method to select the random stepsizes adaptively, which yields an adaptive stochastic SQP. Global almost-sure convergence is established for both the non-adaptive and adaptive methods. Numerical experiments on nonlinear problems from the CUTEst test set demonstrate the superiority of the adaptive algorithm.
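To make the merit function concrete: for the problem of minimizing f(x) subject to c(x) = 0, with Lagrangian L(x, λ) = f(x) + λᵀc(x), differentiable exact augmented Lagrangians in the Pillo–Grippo line that this paper follows take a form like the display below. This is an illustrative sketch; the paper's exact scaling and penalty parameters may differ.

```latex
\[
  \mathcal{L}_{\epsilon,\nu}(x,\lambda)
  = f(x) + \lambda^{T} c(x)
  + \frac{\epsilon}{2}\,\lVert c(x)\rVert^{2}
  + \frac{\nu}{2}\,\lVert G(x)\,\nabla_{x}\mathcal{L}(x,\lambda)\rVert^{2},
\]
```

where G(x) is the constraint Jacobian. The last term penalizes dual infeasibility, which is what makes the function both exact and differentiable in (x, λ) jointly, so an unconstrained-style line search on it can drive the primal and dual iterates simultaneously. A minimal Python/NumPy sketch of one iteration of the non-adaptive stochastic SQP skeleton follows; the names are hypothetical, the Hessian estimate is assumed to make the KKT matrix nonsingular, and the adaptive variant would replace the fixed stepsize with a stochastic Armijo-type test on the merit function above.

```python
import numpy as np

def stochastic_sqp_step(x, lam, grad_est, hess_est, c, G, alpha):
    """One non-adaptive stochastic SQP step (hypothetical sketch).

    grad_est : subsampled estimate of the objective gradient at x
    hess_est : subsampled estimate of the Lagrangian Hessian at (x, lam)
    c, G     : constraint values c(x) and Jacobian G(x) (deterministic)
    alpha    : prespecified deterministic stepsize
    """
    n, m = x.size, c.size
    # Newton/SQP KKT system assembled from the stochastic estimates:
    #   [ B   G^T ] [dx  ]     [ grad_est + G^T lam ]
    #   [ G   0   ] [dlam] = - [ c                  ]
    K = np.block([[hess_est, G.T],
                  [G, np.zeros((m, m))]])
    rhs = -np.concatenate([grad_est + G.T @ lam, c])
    step = np.linalg.solve(K, rhs)
    dx, dlam = step[:n], step[n:]
    # Prespecified stepsize; the adaptive method draws alpha from a
    # stochastic line search on the exact augmented Lagrangian instead.
    return x + alpha * dx, lam + alpha * dlam
```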
Recommendations
- Sequential quadratic optimization for nonlinear equality constrained stochastic optimization
- An active set SQP algorithm for a class of stochastic nonlinear programming
- A Sequential Quadratic Programming Algorithm Using an Incomplete Solution of the Subproblem
- A stabilized SQP method: global convergence
Cites work
- scientific article (untitled; zbMATH DE number 3878686)
- scientific article (untitled; zbMATH DE number 3914081)
- scientific article (untitled; zbMATH DE number 3696832)
- scientific article (untitled; zbMATH DE number 778133)
- scientific article (untitled; zbMATH DE number 5060482)
- A New Class of Augmented Lagrangians in Nonlinear Programming
- A stochastic line search method with expected complexity analysis
- Adaptive sampling strategies for stochastic optimization
- An introduction to matrix concentration inequalities
- An investigation of Newton-sketch and subsampled Newton methods
- Asymptotic behavior of statistical estimators and of optimal solutions of stochastic optimization problems
- CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization
- Complexity and global rates of trust-region methods based on probabilistic models
- Contributions to the theory of stochastic programming
- Convergence of trust-region methods based on probabilistic models
- Estimation of the parameters of linear time series models subject to nonlinear restrictions
- Exact and inexact subsampled Newton methods for optimization
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Hybrid deterministic-stochastic methods for data fitting
- LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
- Line search methods with variable sample size for unconstrained optimization
- On the asymptotics of constrained local M-estimators.
- Optimization methods for large-scale machine learning
- Probability
- Projected Newton Methods for Optimization Problems with Simple Constraints
- Recursive quadratic programming algorithm that uses an exact augmented Lagrangian function
- Robust Stochastic Approximation Approach to Stochastic Programming
- SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
- Sample size selection in optimization methods for machine learning
- Sequential quadratic optimization for nonlinear equality constrained stochastic optimization
- Sequential quadratic programming methods
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Stochastic optimization using a trust-region method and random models
- Stochastic processes. Theory for applications.
- Sub-sampled Newton methods
- User-friendly tail bounds for sums of random matrices
Cited in (12)
- Worst-case complexity of an SQP method for nonlinear equality constrained stochastic optimization
- Provably training overparameterized neural network classifiers with non-convex constraints
- Fully stochastic trust-region sequential quadratic programming for equality-constrained optimization problems
- A digital economy development index based on an improved hierarchical data envelopment analysis approach
- Hessian averaging in stochastic Newton methods achieves superlinear convergence
- An active set SQP algorithm for a class of stochastic nonlinear programming
- Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming
- Accelerating stochastic sequential quadratic programming for equality constrained optimization using predictive variance reduction
- Sequential quadratic optimization for nonlinear equality constrained stochastic optimization
- Sequential quadratic optimization for stochastic optimization with deterministic nonlinear inequality and equality constraints
- Further results on implicit models with application to LQ adaptive optimization
- A sequential quadratic programming method with high-probability complexity bounds for nonlinear equality-constrained stochastic optimization