Stochastic optimization using a trust-region method and random models
Abstract: In this paper, we propose and analyze a trust-region model-based algorithm for solving unconstrained stochastic optimization problems. Our framework utilizes random models of an objective function, obtained from stochastic observations of the function or its gradient. Our method also utilizes estimates of function values to gauge the progress being made. The convergence analysis relies on the requirement that these models and estimates are sufficiently accurate with sufficiently high, but fixed, probability. Beyond these conditions, no assumptions are made on how the models and estimates are generated. Under these general conditions we show almost sure global convergence of the method to a first-order stationary point. In the second part of the paper, we present examples of generating sufficiently accurate random models under biased or unbiased noise assumptions. Lastly, we present computational results showing the benefits of the proposed method compared to existing approaches based on sample averaging or stochastic gradients.
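The abstract outlines the algorithmic template: build a random model of the objective from noisy observations, take a trust-region step, and accept or reject it using noisy function-value estimates. The sketch below illustrates that loop under assumptions of my own: the oracle names `f_noisy`/`g_noisy`, the fixed sample size, and the linear (Cauchy-step) model are illustrative simplifications, not the paper's construction or its probabilistic accuracy conditions.

```python
import numpy as np

def trust_region_random_models(f_noisy, g_noisy, x0, delta0=1.0, eta=0.1,
                               gamma=2.0, delta_max=10.0, n_samples=30,
                               max_iter=200, seed=0):
    """Toy trust-region loop driven by noisy zeroth/first-order oracles."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    delta = delta0
    for _ in range(max_iter):
        # Random model: a linear model whose slope is a gradient sample average.
        g = np.mean([g_noisy(x, rng) for _ in range(n_samples)], axis=0)
        g_norm = np.linalg.norm(g)
        if g_norm < 1e-8:                # model gradient is ~0: stop
            break
        s = -(delta / g_norm) * g        # Cauchy step on the trust-region boundary
        # Function-value estimates (sample averages) used to gauge progress.
        f_x = np.mean([f_noisy(x, rng) for _ in range(n_samples)])
        f_s = np.mean([f_noisy(x + s, rng) for _ in range(n_samples)])
        predicted = delta * g_norm       # model decrease: m(x) - m(x + s)
        rho = (f_x - f_s) / predicted    # estimated vs. predicted reduction
        if rho >= eta:                   # successful step: move and expand
            x = x + s
            delta = min(gamma * delta, delta_max)
        else:                            # unsuccessful step: shrink the region
            delta /= gamma
    return x

# Usage on a noisy quadratic: observations carry additive Gaussian noise.
if __name__ == "__main__":
    f = lambda x, rng: float(x @ x) + 0.01 * rng.standard_normal()
    g = lambda x, rng: 2.0 * x + 0.01 * rng.standard_normal(x.shape)
    print(trust_region_random_models(f, g, x0=np.ones(5)))
```

A quadratic model and adaptive sample sizes (as the paper's accuracy conditions require) would slot into the same loop; the accept/expand versus reject/shrink logic is the part the convergence analysis hinges on.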
Recommendations
- A stochastic trust region method for unconstrained optimization problems
- Stochastic derivative-free optimization using a trust region framework
- Stochastic trust-region methods with trust-region radius depending on probabilistic models
- A quasi-Newton trust-region method for optimization under uncertainty using stochastic simplex approximate gradients
- Stochastic variance reduced gradient methods using a trust-region-like scheme
Cites work
- scientific article; zbMATH DE number 2121076
- A Stochastic Approximation Method
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Acceleration of Stochastic Approximation by Averaging
- Adaptive stochastic approximation by the simultaneous perturbation method
- Adaptive subgradient methods for online learning and stochastic optimization
- An optimal method for stochastic composite optimization
- Analysis of Sample-Path Optimization
- Benchmarking Derivative-Free Optimization Algorithms
- Convergence of trust-region methods based on probabilistic models
- Derivative-free optimization of expensive functions with computational error using weighted regression
- Global convergence of general derivative-free trust-region algorithms to first- and second-order critical points
- Introduction to Derivative-Free Optimization
- Introduction to Stochastic Search and Optimization
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Multivariate stochastic approximation using a simultaneous perturbation gradient approximation
- On sampling rates in simulation-based recursions
- Optimization methods for large-scale machine learning
- Probability: Theory and Examples
- Robust Stochastic Approximation Approach to Stochastic Programming
- Stochastic Estimation of the Maximum of a Regression Function
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Stochastic derivative-free optimization using a trust region framework
- The empirical behavior of sampling methods for stochastic programming
- UOBYQA: unconstrained optimization by quadratic approximation
- Variable-number sample-path optimization
Cited in (60)
- Iteratively sampling scheme for stochastic optimization with variable number sample path
- Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques
- Newton-type methods for non-convex optimization under inexact Hessian information
- A stochastic Levenberg-Marquardt method using random models with complexity results
- Convergence of Newton-MR under inexact Hessian information
- Scalable subspace methods for derivative-free nonlinear least-squares optimization
- A stochastic trust region method for unconstrained optimization problems
- A discussion on variational analysis in derivative-free optimization
- Surrogate-based promising area search for Lipschitz continuous simulation optimization
- Solving Nonsmooth and Nonconvex Compound Stochastic Programs with Applications to Risk Measure Minimization
- A trust-region algorithm with adaptive stochastic collocation for PDE optimization under uncertainty
- Derivative-free optimization methods
- Constrained stochastic blackbox optimization using a progressive barrier and probabilistic estimates
- Robust optimization of noisy blackbox problems using the mesh adaptive direct search algorithm
- Stochastic trust-region methods with trust-region radius depending on probabilistic models
- A derivative-free trust-region algorithm for the optimization of functions smoothed via Gaussian convolution using adaptive multiple importance sampling
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
- A stochastic trust-region framework for policy optimization
- A zeroth order method for stochastic weakly convex optimization
- TREGO: a trust-region framework for efficient global optimization
- Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates
- Improving the flexibility and robustness of model-based derivative-free optimization solvers
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
- Linesearch Newton-CG methods for convex optimization with noise
- A stochastic first-order trust-region method with inexact restoration for finite-sum minimization
- A Levenberg-Marquardt method for large nonlinear least-squares problems with dynamic accuracy in functions and gradients
- A fully stochastic second-order trust region method
- Stochastic derivative-free optimization using a trust region framework
- Coupled learning enabled stochastic programming with endogenous uncertainty
- The impact of noise on evaluation complexity: the deterministic trust-region case
- A new nonmonotone adaptive trust region algorithm
- Convergence of trust-region methods based on probabilistic models
- A trust region method for noisy unconstrained optimization
- scientific article; zbMATH DE number 4099039
- ASTRO-DF: a class of adaptive sampling trust-region algorithms for derivative-free stochastic optimization
- Expected complexity analysis of stochastic direct-search
- SPT: A stochastic tunneling algorithm for global optimization
- Globally Convergent Multilevel Training of Deep Residual Networks
- Open problem: Iterative schemes for stochastic optimization: convergence statements and limit theorems
- A stochastic line search method with expected complexity analysis
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\mathcal{O}(\epsilon^{-3/2})\) Complexity Bound
- Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming
- Riemannian Natural Gradient Methods
- Convergence analysis of a subsampled Levenberg-Marquardt algorithm
- Hessian averaging in stochastic Newton methods achieves superlinear convergence
- An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints
- Subsampled first-order optimization methods with applications in imaging
- First- and second-order high probability complexity bounds for trust-region methods with noisy oracles
- An investigation of stochastic trust-region based algorithms for finite-sum minimization
- Stochastic trust-region algorithm in random subspaces with convergence and expected complexity analyses
- Worst case complexity bounds for linesearch-type derivative-free algorithms
- Stochastic average model methods
- Derivative-free bound-constrained optimization for solving structured problems with surrogate models
- A sequential quadratic programming method with high-probability complexity bounds for nonlinear equality-constrained stochastic optimization
- A non-monotone trust-region method with noisy oracles and additional sampling
- Fully stochastic trust-region sequential quadratic programming for equality-constrained optimization problems
- Stochastic trust-region and direct-search methods: a weak tail bound condition and reduced sample sizing