Global convergence rate analysis of unconstrained optimization methods based on probabilistic models

From MaRDI portal
Publication:1646566

DOI: 10.1007/s10107-017-1137-4
zbMath: 1407.90307
arXiv: 1505.06070
OpenAlex: W2137633966
MaRDI QID: Q1646566

Katya Scheinberg, Coralia Cartis

Publication date: 25 June 2018

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://arxiv.org/abs/1505.06070



Related Items

Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
Streaming Principal Component Analysis From Incomplete Data
A fully stochastic second-order trust region method
Adaptive Sampling Strategies for Stochastic Optimization
A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models
Global Linear Convergence of Evolution Strategies on More than Smooth Strongly Convex Functions
Smoothness parameter of power of Euclidean norm
Scalable subspace methods for derivative-free nonlinear least-squares optimization
An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
Zeroth-order optimization with orthogonal random directions
Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming
A trust region method for noisy unconstrained optimization
Direct Search Based on Probabilistic Descent in Reduced Spaces
An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints
A note on solving nonlinear optimization problems in variable precision
Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
Adaptive sampling stochastic multigradient algorithm for stochastic multiobjective optimization
Global optimization using random embeddings
Affine-invariant contracting-point methods for convex optimization
Bound-constrained global optimization of functions with low effective dimensionality using multiple random embeddings
Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques
Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
Gradient regularization of Newton method with Bregman distances
Newton-type methods for non-convex optimization under inexact Hessian information
Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
An algorithm for the minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity
Regional complexity analysis of algorithms for nonconvex smooth optimization
Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates
An accelerated directional derivative method for smooth stochastic convex optimization
Minimizing uniformly convex functions by cubic regularization of Newton method
A stochastic subspace approach to gradient-free optimization in high dimensions
A Stochastic Line Search Method with Expected Complexity Analysis
Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
Derivative-free optimization methods
An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration
Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
Linesearch Newton-CG methods for convex optimization with noise
High-Order Optimization Methods for Fully Composite Problems
A stochastic first-order trust-region method with inexact restoration for finite-sum minimization



Cites Work