Inexact restoration with subsampled trust-region methods for finite-sum minimization
DOI: 10.1007/s10589-020-00196-w · zbMath: 1445.90102 · arXiv: 1902.01710 · OpenAlex: W3034723207 · Wikidata: Q113107254 · MaRDI QID: Q2191786
Stefania Bellavia, Benedetta Morini, Nataša Krejić
Publication date: 26 June 2020
Published in: Computational Optimization and Applications
Full work available at URL: https://arxiv.org/abs/1902.01710
Keywords: subsampling; trust-region methods; inexact restoration; local and global convergence; worst-case evaluation complexity
MSC: Abstract computational complexity for mathematical programming problems (90C60); Nonlinear programming (90C30)
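The keywords describe subsampled function and derivative estimates used inside a trust-region loop for finite-sum objectives f(x) = (1/N) Σ_i f_i(x). The Python sketch below is a rough illustration of that idea only, not the paper's algorithm: the least-squares instance, the truncated regularized-Newton step, and the sample-doubling rule on unsuccessful iterations are invented stand-ins for the inexact-restoration machinery the authors analyze.

import numpy as np

# Toy finite-sum instance: f(x) = (1/N) * sum_i 0.5 * (A[i] @ x - b[i])**2.
rng = np.random.default_rng(1)
N, d = 200, 5
A = rng.normal(size=(N, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=N)

def f_i(x, i): return 0.5 * (A[i] @ x - b[i]) ** 2
def g_i(x, i): return (A[i] @ x - b[i]) * A[i]
def H_i(x, i): return np.outer(A[i], A[i])

def subsampled_tr(x0, n=16, delta=1.0, eta=0.1, iters=60, seed=0):
    """Trust-region iteration built on subsampled models of the finite sum."""
    samp = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        S = samp.choice(N, size=min(n, N), replace=False)
        g = sum(g_i(x, i) for i in S) / len(S)   # subsampled gradient
        H = sum(H_i(x, i) for i in S) / len(S)   # subsampled Hessian
        # Regularized Newton direction truncated to the trust radius,
        # a crude stand-in for an exact trust-region subproblem solver.
        s = np.linalg.solve(H + 1e-8 * np.eye(d), -g)
        if np.linalg.norm(s) > delta:
            s *= delta / np.linalg.norm(s)
        pred = -(g @ s + 0.5 * s @ H @ s)        # model (predicted) decrease
        ared = sum(f_i(x, i) - f_i(x + s, i) for i in S) / len(S)
        if ared / max(pred, 1e-16) >= eta:       # successful: accept, widen
            x, delta = x + s, min(2.0 * delta, 10.0)
        else:                                    # unsuccessful: shrink radius,
            delta *= 0.5                         # grow the sample
            n = min(2 * n, N)
    return x

print(np.linalg.norm(A @ subsampled_tr(np.zeros(d)) - b))

The point of coupling the radius update with the sample size is that a rejected step may reflect either a poor local model or a noisy subsampled estimate; shrinking the radius addresses the former, growing the sample the latter.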
Cites Work
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Sample size selection in optimization methods for machine learning
- An adaptive Monte Carlo algorithm for computing mixed logit estimators
- Efficient sample sizes in stochastic nonlinear programming
- Variable-number sample-path optimization
- Sub-sampled Newton methods
- Line search methods with variable sample size for unconstrained optimization
- Inexact-restoration algorithm for constrained optimization
- Newton-type methods for non-convex optimization under inexact Hessian information
- Nonmonotone line search methods with variable sample size
- Convergence theory for nonconvex stochastic programming with an application to mixed logit
- Inexact Restoration approach for minimization with inexact evaluation of the objective function
- Hybrid Deterministic-Stochastic Methods for Data Fitting
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- On Choosing Parameters in Retrospective-Approximation Algorithms for Stochastic Root Finding and Simulation Optimization
- Numerical Optimization
- Trust Region Methods
- On the employment of inexact restoration for the minimization of functions whose evaluation is subject to errors
- Optimization Methods for Large-Scale Machine Learning
- Adaptive cubic regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization
- An investigation of Newton-Sketch and subsampled Newton methods
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
- Exact and inexact subsampled Newton methods for optimization
- Subsampled inexact Newton methods for minimizing large sums of convex functions
- Inexact-restoration method with Lagrangian tangent decrease and new merit function for nonlinear programming