Stochastic optimization using a trust-region method and random models
Publication: 1646570
DOI: 10.1007/S10107-017-1141-8
zbMath: 1401.90136
arXiv: 1504.04231
OpenAlex: W2963465983
MaRDI QID: Q1646570
Publication date: 25 June 2018
Published in: Mathematical Programming. Series A. Series B
Abstract: In this paper, we propose and analyze a trust-region model-based algorithm for solving unconstrained stochastic optimization problems. Our framework utilizes random models of an objective function $f(x)$, obtained from stochastic observations of the function or its gradient. Our method also utilizes estimates of function values to gauge the progress being made. The convergence analysis relies on the requirement that these models and estimates are sufficiently accurate with sufficiently high, but fixed, probability. Beyond these conditions, no assumptions are made on how the models and estimates are generated. Under these general conditions we show almost sure global convergence of the method to a first-order stationary point. In the second part of the paper, we present examples of generating sufficiently accurate random models under biased or unbiased noise assumptions. Lastly, we present computational results showing the benefits of the proposed method compared to existing approaches based on sample averaging or stochastic gradients.
Full work available at URL: https://arxiv.org/abs/1504.04231
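To illustrate the kind of method the abstract describes, the following is a minimal sketch of a trust-region iteration driven by random models and noisy function-value estimates. It is not the authors' algorithm: the toy objective in `noisy_oracle`, the averaging-based model construction, the Cauchy step, and all parameter values (`delta`, `eta`, `gamma`, `n_samples`) are illustrative assumptions.

```python
import numpy as np


def noisy_oracle(x, rng, noise=0.1):
    """Noisy estimates of f(x) and grad f(x) for a toy quadratic f(x) = 0.5*||x||^2."""
    f = 0.5 * float(np.dot(x, x))
    g = np.array(x, dtype=float)
    return (f + noise * rng.standard_normal(),
            g + noise * rng.standard_normal(x.size))


def stochastic_trust_region(x0, max_iter=200, delta=1.0, eta=0.1,
                            gamma=2.0, n_samples=30, seed=0):
    """Trust-region loop that uses averaged noisy observations as a random model."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Random model: average n_samples noisy function/gradient observations.
        samples = [noisy_oracle(x, rng) for _ in range(n_samples)]
        f_est = np.mean([f for f, _ in samples])
        g_est = np.mean([g for _, g in samples], axis=0)
        g_norm = np.linalg.norm(g_est)
        if g_norm < 1e-8:
            break
        # Cauchy step: minimize the linear model within the trust region.
        step = -(delta / g_norm) * g_est
        predicted = -np.dot(g_est, step)  # model-predicted decrease (> 0)
        # Estimate the function value at the trial point to gauge progress.
        f_trial = np.mean([noisy_oracle(x + step, rng)[0] for _ in range(n_samples)])
        rho = (f_est - f_trial) / max(predicted, 1e-16)
        if rho >= eta:       # estimated success: accept the step, enlarge the radius
            x = x + step
            delta *= gamma
        else:                # estimated failure: reject the step, shrink the radius
            delta /= gamma
    return x


# Example: iterates on the toy quadratic are driven toward the minimizer at 0.
print(stochastic_trust_region(np.ones(5)))
```

In the paper's framework, the accuracy of the models and of the function-value estimates is required to hold only with sufficiently high, fixed probability at each iteration; the fixed sample averaging above merely stands in for that mechanism.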
Nonlinear programming (90C30) Derivative-free methods and methods using generalized derivatives (90C56) Stochastic programming (90C15)
Cites Work
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Stochastic derivative-free optimization using a trust region framework
- An optimal method for stochastic composite optimization
- Variable-number sample-path optimization
- UOBYQA: unconstrained optimization by quadratic approximation
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- The empirical behavior of sampling methods for stochastic programming
- Adaptive stochastic approximation by the simultaneous perturbation method
- Convergence of Trust-Region Methods Based on Probabilistic Models
- Introduction to Derivative-Free Optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Multivariate stochastic approximation using a simultaneous perturbation gradient approximation
- Acceleration of Stochastic Approximation by Averaging
- Introduction to Stochastic Search and Optimization
- On Sampling Rates in Simulation-Based Recursions
- Optimization Methods for Large-Scale Machine Learning
- Analysis of Sample-Path Optimization
- Benchmarking Derivative-Free Optimization Algorithms
- Global Convergence of General Derivative-Free Trust-Region Algorithms to First- and Second-Order Critical Points
- Derivative-Free Optimization of Expensive Functions with Computational Error Using Weighted Regression
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Stochastic Estimation of the Maximum of a Regression Function
- A Stochastic Approximation Method
- Probability
Related Items (56)
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
- A Levenberg-Marquardt method for large nonlinear least-squares problems with dynamic accuracy in functions and gradients
- A new nonmonotone adaptive trust region algorithm.
- A discussion on variational analysis in derivative-free optimization
- A fully stochastic second-order trust region method
- Stochastic derivative-free optimization using a trust region framework
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Robust optimization of noisy blackbox problems using the mesh adaptive direct search algorithm
- Iteratively sampling scheme for stochastic optimization with variable number sample path
- ASTRO-DF: A Class of Adaptive Sampling Trust-Region Algorithms for Derivative-Free Stochastic Optimization
- A Stochastic Levenberg--Marquardt Method Using Random Models with Complexity Results
- Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models
- Coupled Learning Enabled Stochastic Programming with Endogenous Uncertainty
- A Stochastic Trust-Region Framework for Policy Optimization
- Scalable subspace methods for derivative-free nonlinear least-squares optimization
- An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
- Convergence analysis of a subsampled Levenberg-Marquardt algorithm
- Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming
- A trust region method for noisy unconstrained optimization
- An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints
- TREGO: a trust-region framework for efficient global optimization
- Globally Convergent Multilevel Training of Deep Residual Networks
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
- Constrained stochastic blackbox optimization using a progressive barrier and probabilistic estimates
- Hessian averaging in stochastic Newton methods achieves superlinear convergence
- Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
- Open Problem—Iterative Schemes for Stochastic Optimization: Convergence Statements and Limit Theorems
- Riemannian Natural Gradient Methods
- Newton-type methods for non-convex optimization under inexact Hessian information
- The impact of noise on evaluation complexity: the deterministic trust-region case
- Surrogate-Based Promising Area Search for Lipschitz Continuous Simulation Optimization
- Convergence of Newton-MR under Inexact Hessian Information
- Stochastic trust-region algorithm in random subspaces with convergence and expected complexity analyses
- A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling
- Worst case complexity bounds for linesearch-type derivative-free algorithms
- SPT: A stochastic tunneling algorithm for global optimization
- Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates
- A non-monotone trust-region method with noisy oracles and additional sampling
- An investigation of stochastic trust-region based algorithms for finite-sum minimization
- Stochastic average model methods
- Fully stochastic trust-region sequential quadratic programming for equality-constrained optimization problems
- Stochastic trust-region and direct-search methods: a weak tail bound condition and reduced sample sizing
- Subsampled first-order optimization methods with applications in imaging
- First- and second-order high probability complexity bounds for trust-region methods with noisy oracles
- Improving the Flexibility and Robustness of Model-based Derivative-free Optimization Solvers
- Derivative-free bound-constrained optimization for solving structured problems with surrogate models
- A sequential quadratic programming method with high-probability complexity bounds for nonlinear equality-constrained stochastic optimization
- A Stochastic Line Search Method with Expected Complexity Analysis
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
- A zeroth order method for stochastic weakly convex optimization
- Derivative-free optimization methods
- Expected complexity analysis of stochastic direct-search
- Linesearch Newton-CG methods for convex optimization with noise
- Solving Nonsmooth and Nonconvex Compound Stochastic Programs with Applications to Risk Measure Minimization
- A stochastic first-order trust-region method with inexact restoration for finite-sum minimization