An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
Publication: 6038658
DOI: 10.1007/s10107-022-01846-z
zbMath: 1518.90057
arXiv: 2102.05320
OpenAlex: W3126903296
MaRDI QID: Q6038658
Sen Na, Mihai Anitescu, Mladen Kolar
Publication date: 2 May 2023
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/2102.05320
MSC classification:
- Nonconvex programming, global optimization (90C26)
- Nonlinear programming (90C30)
- Stochastic programming (90C15)
- Methods of successive quadratic programming type (90C55)
Related Items (4)
- Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming
- Hessian averaging in stochastic Newton methods achieves superlinear convergence
- Accelerating stochastic sequential quadratic programming for equality constrained optimization using predictive variance reduction
- Provably training overparameterized neural network classifiers with non-convex constraints
Cites Work
- User-friendly tail bounds for sums of random matrices
- Sample size selection in optimization methods for machine learning
- Estimation of the parameters of linear time series models subject to nonlinear restrictions
- Recursive quadratic programming algorithm that uses an exact augmented Lagrangian function
- Asymptotic behavior of statistical estimators and of optimal solutions of stochastic optimization problems
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Stochastic optimization using a trust-region method and random models
- Sub-sampled Newton methods
- On the asymptotics of constrained local \(M\)-estimators.
- Line search methods with variable sample size for unconstrained optimization
- CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization
- Sequential Quadratic Programming Methods
- Hybrid Deterministic-Stochastic Methods for Data Fitting
- Convergence of Trust-Region Methods Based on Probabilistic Models
- Robust Stochastic Approximation Approach to Stochastic Programming
- A New Class of Augmented Lagrangians in Nonlinear Programming
- Probability
- Complexity and global rates of trust-region methods based on probabilistic models
- Adaptive Sampling Strategies for Stochastic Optimization
- Optimization Methods for Large-Scale Machine Learning
- Projected Newton Methods for Optimization Problems with Simple Constraints
- Sequential Quadratic Optimization for Nonlinear Equality Constrained Stochastic Optimization
- An investigation of Newton-Sketch and subsampled Newton methods
- Contributions to the theory of stochastic programming
- A Stochastic Line Search Method with Expected Complexity Analysis
- SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
- An Introduction to Matrix Concentration Inequalities
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Exact and inexact subsampled Newton methods for optimization
- LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums