Stochastic optimization using a trust-region method and random models

From MaRDI portal

Publication: 1646570

DOI: 10.1007/S10107-017-1141-8
zbMath: 1401.90136
arXiv: 1504.04231
OpenAlex: W2963465983
MaRDI QID: Q1646570

Ruobing Chen, Matt Menickelly, Katya Scheinberg

Publication date: 25 June 2018

Published in: Mathematical Programming. Series A. Series B

Abstract: In this paper, we propose and analyze a trust-region model-based algorithm for solving unconstrained stochastic optimization problems. Our framework utilizes random models of an objective function $f(x)$, obtained from stochastic observations of the function or its gradient. Our method also utilizes estimates of function values to gauge the progress that is being made. The convergence analysis relies on the requirement that these models and estimates are sufficiently accurate with sufficiently high, but fixed, probability. Beyond these conditions, no assumptions are made on how the models and estimates are generated. Under these general conditions we show almost sure global convergence of the method to a first-order stationary point. In the second part of the paper, we present examples of generating sufficiently accurate random models under biased or unbiased noise assumptions. Lastly, we present computational results showing the benefits of the proposed method compared to existing approaches based on sample averaging or stochastic gradients.
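The loop described in the abstract — build a random model from stochastic observations, take a trust-region step, accept or reject it using noisy function-value estimates, and adapt the radius — can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact algorithm: here the random model is simply a linear model built by averaging stochastic gradient samples, function-value estimates are plain sample averages, and all names (`storm_sketch`, `grad_sample`, `f_sample`) and parameter values are assumptions chosen for illustration.

```python
import numpy as np

def storm_sketch(grad_sample, f_sample, x0, delta0=1.0, n_samples=30,
                 eta=0.1, gamma=2.0, max_iter=200):
    """Toy stochastic trust-region loop in the spirit of the paper.

    grad_sample(x) returns a noisy gradient observation and f_sample(x)
    a noisy function value.  The "random model" is linear, built by
    averaging gradient samples; progress is gauged with averaged
    function-value estimates, as in the framework's accept/reject test.
    """
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        # Random model: average stochastic gradients to approximate grad f(x).
        g = np.mean([grad_sample(x) for _ in range(n_samples)], axis=0)
        if np.linalg.norm(g) < 1e-8:
            break
        # Trust-region step for a linear model: full step along -g.
        s = -delta * g / np.linalg.norm(g)
        # Function-value estimates at x and x + s (sample averages).
        f0 = np.mean([f_sample(x) for _ in range(n_samples)])
        f1 = np.mean([f_sample(x + s) for _ in range(n_samples)])
        pred = delta * np.linalg.norm(g)  # decrease predicted by the model
        rho = (f0 - f1) / pred            # estimated vs. predicted decrease
        if rho >= eta:                    # successful step: accept, expand radius
            x, delta = x + s, gamma * delta
        else:                             # unsuccessful: reject, shrink radius
            delta /= gamma
    return x

# Toy problem: minimize f(x) = ||x||^2 observed under additive Gaussian noise.
rng = np.random.default_rng(0)
noisy_f = lambda x: float(x @ x) + 0.01 * rng.normal()
noisy_g = lambda x: 2 * x + 0.01 * rng.normal(size=x.shape)
x_star = storm_sketch(noisy_g, noisy_f, x0=np.array([2.0, -1.5]))
```

Once the noise dominates the averaged gradient signal near the minimizer, the accept/reject test starts failing and the radius shrinks, so the iterates settle near the stationary point rather than diverging; the paper's analysis makes this intuition precise under probabilistic accuracy conditions on the models and estimates.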


Full work available at URL: https://arxiv.org/abs/1504.04231





Cites Work


Related Items (56)

* Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
* A Levenberg-Marquardt method for large nonlinear least-squares problems with dynamic accuracy in functions and gradients
* A new nonmonotone adaptive trust region algorithm
* A discussion on variational analysis in derivative-free optimization
* A fully stochastic second-order trust region method
* Stochastic derivative-free optimization using a trust region framework
* Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
* Robust optimization of noisy blackbox problems using the mesh adaptive direct search algorithm
* Iteratively sampling scheme for stochastic optimization with variable number sample path
* ASTRO-DF: A Class of Adaptive Sampling Trust-Region Algorithms for Derivative-Free Stochastic Optimization
* A Stochastic Levenberg--Marquardt Method Using Random Models with Complexity Results
* Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models
* Coupled Learning Enabled Stochastic Programming with Endogenous Uncertainty
* A Stochastic Trust-Region Framework for Policy Optimization
* Scalable subspace methods for derivative-free nonlinear least-squares optimization
* An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
* Convergence analysis of a subsampled Levenberg-Marquardt algorithm
* Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming
* A trust region method for noisy unconstrained optimization
* An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints
* TREGO: a trust-region framework for efficient global optimization
* Globally Convergent Multilevel Training of Deep Residual Networks
* Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
* Constrained stochastic blackbox optimization using a progressive barrier and probabilistic estimates
* Hessian averaging in stochastic Newton methods achieves superlinear convergence
* Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques
* Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
* Open Problem—Iterative Schemes for Stochastic Optimization: Convergence Statements and Limit Theorems
* Riemannian Natural Gradient Methods
* Newton-type methods for non-convex optimization under inexact Hessian information
* The impact of noise on evaluation complexity: the deterministic trust-region case
* Surrogate-Based Promising Area Search for Lipschitz Continuous Simulation Optimization
* Convergence of Newton-MR under Inexact Hessian Information
* Stochastic trust-region algorithm in random subspaces with convergence and expected complexity analyses
* A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling
* Worst case complexity bounds for linesearch-type derivative-free algorithms
* SPT: A stochastic tunneling algorithm for global optimization
* Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates
* A non-monotone trust-region method with noisy oracles and additional sampling
* An investigation of stochastic trust-region based algorithms for finite-sum minimization
* Stochastic average model methods
* Fully stochastic trust-region sequential quadratic programming for equality-constrained optimization problems
* Stochastic trust-region and direct-search methods: a weak tail bound condition and reduced sample sizing
* Subsampled first-order optimization methods with applications in imaging
* First- and second-order high probability complexity bounds for trust-region methods with noisy oracles
* Improving the Flexibility and Robustness of Model-based Derivative-free Optimization Solvers
* Derivative-free bound-constrained optimization for solving structured problems with surrogate models
* A sequential quadratic programming method with high-probability complexity bounds for nonlinear equality-constrained stochastic optimization
* A Stochastic Line Search Method with Expected Complexity Analysis
* Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
* A zeroth order method for stochastic weakly convex optimization
* Derivative-free optimization methods
* Expected complexity analysis of stochastic direct-search
* Linesearch Newton-CG methods for convex optimization with noise
* Solving Nonsmooth and Nonconvex Compound Stochastic Programs with Applications to Risk Measure Minimization
* A stochastic first-order trust-region method with inexact restoration for finite-sum minimization

Uses Software




This page was built for publication: Stochastic optimization using a trust-region method and random models