A stochastic first-order trust-region method with inexact restoration for finite-sum minimization
DOI: 10.1007/S10589-022-00430-7 · OpenAlex: W3179487015 · MaRDI QID: Q2111466 · FDO: Q2111466
Authors: Stefania Bellavia, Benedetta Morini, Nataša Krejić, Simone Rebegoldi
Publication date: 16 January 2023
Published in: Computational Optimization and Applications
Full work available at URL: https://arxiv.org/abs/2107.03129
Recommendations
- Inexact restoration with subsampled trust-region methods for finite-sum minimization
- A stochastic trust region method for unconstrained optimization problems
- A fully stochastic second-order trust region method
- Inexact trust-region methods for PDE-constrained optimization
- A First-Order Convergence Analysis of Trust-Region Methods with Inexact Jacobians
- Stochastic trust-region methods with trust-region radius depending on probabilistic models
- A first-order convergence analysis of trust-region methods with inexact Jacobians and inequality constraints
- A Trust-region Method for Nonsmooth Nonconvex Optimization
- A recursive trust-region method for non-convex constrained minimization
- A quasi-Newton trust-region method for optimization under uncertainty using stochastic simplex approximate gradients
Keywords: subsampling; trust-region methods; worst-case iteration complexity; inexact restoration; finite-sum minimization
MSC classifications: Nonlinear programming (90C30); Abstract computational complexity for mathematical programming problems (90C60)
Cites Work
- A Stochastic Approximation Method
- Title not available
- Stochastic optimization using a trust-region method and random models
- Convergence of Trust-Region Methods Based on Probabilistic Models
- Inexact-restoration algorithm for constrained optimization
- Minimizing finite sums with the stochastic average gradient
- A Stochastic Line Search Method with Expected Complexity Analysis
- Inexact restoration approach for minimization with inexact evaluation of the objective function
- Nonlinear programming
- Optimization Methods for Large-Scale Machine Learning
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models
- Adaptive cubic regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
- Inexact restoration with subsampled trust-region methods for finite-sum minimization
- On the employment of inexact restoration for the minimization of functions whose evaluation is subject to errors
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques
Cited In (5)
- Special issue for SIMAI 2020-2021: large-scale optimization and applications
- A non-monotone trust-region method with noisy oracles and additional sampling
- A stochastic first-order trust-region method with inexact restoration for finite-sum minimization
- Inexact restoration for minimization with inexact evaluation both of the objective function and the constraints
- An investigation of stochastic trust-region based algorithms for finite-sum minimization