Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
From MaRDI portal
Publication: 5244400
DOI: 10.1137/18M1226282 · zbMath: 1427.90228 · arXiv: 1811.03831 · OpenAlex: W2988898191 · Wikidata: Q126797329 · Scholia: Q126797329 · MaRDI QID: Q5244400
Authors: Stefania Bellavia, Gianmarco Gurioli, Benedetta Morini, Philippe L. Toint
Publication date: 21 November 2019
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1811.03831
Mathematics Subject Classification:
- Analysis of algorithms (68W40)
- Numerical mathematical programming methods (65K05)
- Nonconvex programming, global optimization (90C26)
- Learning and adaptive systems in artificial intelligence (68T05)
- Numerical methods based on nonlinear programming (49M37)
Related Items
- Inexact restoration for derivative-free expensive function minimization and applications
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
- Model-Based Derivative-Free Methods for Convex-Constrained Optimization
- Ritz-like values in steplength selections for stochastic gradient methods
- Tensor Bernstein concentration inequalities with an application to sample estimators for high-order moments
- A nonlinear conjugate gradient method using inexact first-order information
- An adaptive regularization method in Banach spaces
- Inexact restoration for minimization with inexact evaluation both of the objective function and the constraints
- Inexact restoration with subsampled trust-region methods for finite-sum minimization
- A note on solving nonlinear optimization problems in variable precision
- A regularization method for constrained nonlinear least squares
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
- Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques
- The evaluation complexity of finding high-order minimizers of nonconvex optimization
- The impact of noise on evaluation complexity: the deterministic trust-region case
- An algorithm for the minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity
- High-order evaluation complexity for convexly-constrained optimization with non-Lipschitzian group sparsity terms
- Inexact derivative-free optimization for bilevel learning
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
- An adaptive high order method for finding third-order critical points of nonconvex optimization
- Linesearch Newton-CG methods for convex optimization with noise
Cites Work
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Trust-region and other regularisations of linear least-squares problems
- Introductory lectures on convex optimization. A basic course.
- A Levenberg-Marquardt method for large nonlinear least-squares problems with dynamic accuracy in functions and gradients
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Sub-sampled Newton methods
- Newton-type methods for non-convex optimization under inexact Hessian information
- Cubic regularization of Newton method and its global performance
- On the Oracle Complexity of First-Order and Derivative-Free Algorithms for Smooth Nonconvex Minimization
- Convergence of Trust-Region Methods Based on Probabilistic Models
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- Trust Region Methods
- Optimization Methods for Large-Scale Machine Learning
- An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity
- Black-Box Complexity of Local Minimization
- Adaptive cubic regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization
- A Stochastic Levenberg--Marquardt Method Using Random Models with Complexity Results
- Worst-case evaluation complexity and optimality of second-order methods for nonconvex smooth optimization
- A Stochastic Line Search Method with Expected Complexity Analysis
- An Introduction to Matrix Concentration Inequalities