Stochastic optimization using a trust-region method and random models
From MaRDI portal
Abstract: In this paper, we propose and analyze a trust-region model-based algorithm for solving unconstrained stochastic optimization problems. Our framework utilizes random models of the objective function, obtained from stochastic observations of the function or its gradient. Our method also utilizes estimates of function values to gauge the progress being made. The convergence analysis relies on the requirement that these models and estimates are sufficiently accurate with sufficiently high, but fixed, probability. Beyond these conditions, no assumptions are made on how the models and estimates are generated. Under these general conditions we show almost-sure global convergence of the method to a first-order stationary point. In the second part of the paper, we present examples of generating sufficiently accurate random models under biased or unbiased noise assumptions. Lastly, we present computational results showing the benefits of the proposed method compared to existing approaches based on sample averaging or stochastic gradients.
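The iteration described in the abstract can be sketched in code. The following is an illustrative simplification, not the paper's exact algorithm: it builds the random model by sample-averaging noisy gradient observations, gauges progress with sample-averaged function estimates, and takes a Cauchy-type step. All function names and parameters here are hypothetical choices for the sketch.

```python
import numpy as np

def stochastic_trust_region(f_noisy, g_noisy, x0, delta0=1.0,
                            eta=0.1, gamma=2.0, max_iter=200,
                            n_samples=30, seed=0):
    """Sketch of a trust-region method using random models.

    Models and function-value estimates are built from averaged noisy
    observations; averaging more samples makes them 'sufficiently
    accurate' with higher probability, mirroring the paper's accuracy
    conditions. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        # Random linear model: average n_samples noisy gradient observations.
        g = np.mean([g_noisy(x, rng) for _ in range(n_samples)], axis=0)
        gnorm = np.linalg.norm(g)
        if gnorm < 1e-8:
            break
        s = -delta * g / gnorm          # Cauchy-type step on the TR boundary
        # Stochastic estimates of f(x) and f(x + s) to gauge progress.
        f0 = np.mean([f_noisy(x, rng) for _ in range(n_samples)])
        f1 = np.mean([f_noisy(x + s, rng) for _ in range(n_samples)])
        pred = delta * gnorm            # model-predicted decrease
        rho = (f0 - f1) / pred          # estimated vs. predicted decrease
        if rho >= eta:                  # successful step: accept, expand radius
            x, delta = x + s, gamma * delta
        else:                           # unsuccessful step: shrink radius
            delta = delta / gamma
    return x
```

On a noisy quadratic such as `f(x) = ||x||^2` with small observation noise, this loop drives the iterate toward the origin while the trust-region radius adapts to the noise level.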
Recommendations
- A stochastic trust region method for unconstrained optimization problems
- Stochastic derivative-free optimization using a trust region framework
- Stochastic trust-region methods with trust-region radius depending on probabilistic models
- A quasi-Newton trust-region method for optimization under uncertainty using stochastic simplex approximate gradients
- Stochastic variance reduced gradient methods using a trust-region-like scheme
Cites work
- scientific article; zbMATH DE number 2121076
- A Stochastic Approximation Method
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Acceleration of Stochastic Approximation by Averaging
- Adaptive stochastic approximation by the simultaneous perturbation method
- Adaptive subgradient methods for online learning and stochastic optimization
- An optimal method for stochastic composite optimization
- Analysis of Sample-Path Optimization
- Benchmarking Derivative-Free Optimization Algorithms
- Convergence of trust-region methods based on probabilistic models
- Derivative-free optimization of expensive functions with computational error using weighted regression
- Global convergence of general derivative-free trust-region algorithms to first- and second-order critical points
- Introduction to Derivative-Free Optimization
- Introduction to Stochastic Search and Optimization
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Multivariate stochastic approximation using a simultaneous perturbation gradient approximation
- On sampling rates in simulation-based recursions
- Optimization methods for large-scale machine learning
- Probability: Theory and Examples
- Robust Stochastic Approximation Approach to Stochastic Programming
- Stochastic Estimation of the Maximum of a Regression Function
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Stochastic derivative-free optimization using a trust region framework
- The empirical behavior of sampling methods for stochastic programming
- UOBYQA: unconstrained optimization by quadratic approximation
- Variable-number sample-path optimization
Cited in (60)
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
- A discussion on variational analysis in derivative-free optimization
- A trust region method for noisy unconstrained optimization
- A stochastic Levenberg-Marquardt method using random models with complexity results
- Fully stochastic trust-region sequential quadratic programming for equality-constrained optimization problems
- Stochastic trust-region and direct-search methods: a weak tail bound condition and reduced sample sizing
- A stochastic first-order trust-region method with inexact restoration for finite-sum minimization
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
- Hessian averaging in stochastic Newton methods achieves superlinear convergence
- A Levenberg-Marquardt method for large nonlinear least-squares problems with dynamic accuracy in functions and gradients
- Linesearch Newton-CG methods for convex optimization with noise
- Stochastic trust-region algorithm in random subspaces with convergence and expected complexity analyses
- Robust optimization of noisy blackbox problems using the mesh adaptive direct search algorithm
- A derivative-free trust-region algorithm for the optimization of functions smoothed via Gaussian convolution using adaptive multiple importance sampling
- A stochastic trust-region framework for policy optimization
- Solving Nonsmooth and Nonconvex Compound Stochastic Programs with Applications to Risk Measure Minimization
- SPT: A stochastic tunneling algorithm for global optimization
- A new nonmonotone adaptive trust region algorithm
- Scalable subspace methods for derivative-free nonlinear least-squares optimization
- Open problem: Iterative schemes for stochastic optimization: convergence statements and limit theorems
- Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming
- Convergence analysis of a subsampled Levenberg-Marquardt algorithm
- scientific article; zbMATH DE number 4099039
- Stochastic derivative-free optimization using a trust region framework
- A non-monotone trust-region method with noisy oracles and additional sampling
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Newton-type methods for non-convex optimization under inexact Hessian information
- Subsampled first-order optimization methods with applications in imaging
- First- and second-order high probability complexity bounds for trust-region methods with noisy oracles
- Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates
- Coupled learning enabled stochastic programming with endogenous uncertainty
- Improving the flexibility and robustness of model-based derivative-free optimization solvers
- Stochastic trust-region methods with trust-region radius depending on probabilistic models
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
- Riemannian Natural Gradient Methods
- An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints
- A stochastic trust region method for unconstrained optimization problems
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
- Worst case complexity bounds for linesearch-type derivative-free algorithms
- A zeroth order method for stochastic weakly convex optimization
- Derivative-free bound-constrained optimization for solving structured problems with surrogate models
- A fully stochastic second-order trust region method
- An investigation of stochastic trust-region based algorithms for finite-sum minimization
- Convergence of trust-region methods based on probabilistic models
- TREGO: a trust-region framework for efficient global optimization
- A trust-region algorithm with adaptive stochastic collocation for PDE optimization under uncertainty
- An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
- Expected complexity analysis of stochastic direct-search
- Globally Convergent Multilevel Training of Deep Residual Networks
- Iteratively sampling scheme for stochastic optimization with variable number sample path
- Constrained stochastic blackbox optimization using a progressive barrier and probabilistic estimates
- Surrogate-based promising area search for Lipschitz continuous simulation optimization
- A stochastic line search method with expected complexity analysis
- A sequential quadratic programming method with high-probability complexity bounds for nonlinear equality-constrained stochastic optimization
- Convergence of Newton-MR under inexact Hessian information
- Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques
- The impact of noise on evaluation complexity: the deterministic trust-region case
- Stochastic average model methods
- Derivative-free optimization methods
- ASTRO-DF: a class of adaptive sampling trust-region algorithms for derivative-free stochastic optimization