Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
Publication: Q2052165
DOI: 10.1016/j.jco.2021.101591
zbMath: 1481.90287
arXiv: 2005.04639
OpenAlex: W3184878065
Publication date: 25 November 2021
Published in: Journal of Complexity
Full work available at URL: https://arxiv.org/abs/2005.04639
Related Items (6)
- Inexact restoration for minimization with inexact evaluation both of the objective function and the constraints
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
- Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques
- The impact of noise on evaluation complexity: the deterministic trust-region case
- Linesearch Newton-CG methods for convex optimization with noise
- A stochastic first-order trust-region method with inexact restoration for finite-sum minimization
Cites Work
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Introductory lectures on convex optimization. A basic course.
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Stochastic optimization using a trust-region method and random models
- An algorithm for the minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity
- Regional complexity analysis of algorithms for nonconvex smooth optimization
- Newton-type methods for non-convex optimization under inexact Hessian information
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- Random gradient-free minimization of convex functions
- Cubic regularization of Newton method and its global performance
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- Trust Region Methods
- A Derivative-Free Trust-Region Algorithm for the Optimization of Functions Smoothed via Gaussian Convolution Using Adaptive Multiple Importance Sampling
- An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity
- Black-Box Complexity of Local Minimization
- The Best Rank-1 Approximation of a Symmetric Tensor and Related Spherical Optimization Problems
- Adaptive cubic regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
- Worst-Case Examples for Lasserre’s Measure–Based Hierarchy for Polynomial Optimization on the Hypercube
- Inexact Objective Function Evaluations in a Trust-Region Algorithm for PDE-Constrained Optimization under Uncertainty
- A Stochastic Line Search Method with Expected Complexity Analysis
- Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints
- Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
- Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization