Pages that link to "Item:Q1646566"
The following pages link to "Global convergence rate analysis of unconstrained optimization methods based on probabilistic models" (Q1646566):
Displaying 43 items.
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization (Q1785005)
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization (Q2001208)
- An algorithm for the minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity (Q2020598)
- Regional complexity analysis of algorithms for nonconvex smooth optimization (Q2020615)
- Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates (Q2028452)
- An accelerated directional derivative method for smooth stochastic convex optimization (Q2029381)
- Minimizing uniformly convex functions by cubic regularization of Newton method (Q2032037)
- A stochastic subspace approach to gradient-free optimization in high dimensions (Q2044475)
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives (Q2052165)
- Linesearch Newton-CG methods for convex optimization with noise (Q2084588)
- A stochastic first-order trust-region method with inexact restoration for finite-sum minimization (Q2111466)
- Smoothness parameter of power of Euclidean norm (Q2178876)
- A note on solving nonlinear optimization problems in variable precision (Q2191797)
- Newton-type methods for non-convex optimization under inexact Hessian information (Q2205970)
- Affine-invariant contracting-point methods for convex optimization (Q2687041)
- Bound-constrained global optimization of functions with low effective dimensionality using multiple random embeddings (Q2687068)
- Adaptive Sampling Strategies for Stochastic Optimization (Q4562248)
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise (Q4997171)
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy (Q5034938)
- A fully stochastic second-order trust region method (Q5043844)
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization (Q5076721)
- Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models (Q5079553)
- Global Linear Convergence of Evolution Strategies on More than Smooth Strongly Convex Functions (Q5081786)
- Streaming Principal Component Analysis From Incomplete Data (Q5214169)
- A Stochastic Line Search Method with Expected Complexity Analysis (Q5215517)
- Derivative-free optimization methods (Q5230522)
- An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration (Q5231671)
- Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization (Q5244400)
- High-Order Optimization Methods for Fully Composite Problems (Q5869820)
- Scalable subspace methods for derivative-free nonlinear least-squares optimization (Q6038650)
- An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians (Q6038658)
- Zeroth-order optimization with orthogonal random directions (Q6038668)
- Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming (Q6052061)
- A trust region method for noisy unconstrained optimization (Q6052069)
- Direct Search Based on Probabilistic Descent in Reduced Spaces (Q6071887)
- An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints (Q6072951)
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound (Q6116246)
- Adaptive sampling stochastic multigradient algorithm for stochastic multiobjective optimization (Q6142067)
- Global optimization using random embeddings (Q6160282)
- Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques (Q6170037)
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization (Q6175706)
- Gradient regularization of Newton method with Bregman distances (Q6201850)
- Derivative-free separable quadratic modeling and cubic regularization for unconstrained optimization (Q6489314)