Pages that link to "Item:Q1646570"
From MaRDI portal
The following pages link to Stochastic optimization using a trust-region method and random models (Q1646570):
Displaying 35 items.
- Stochastic derivative-free optimization using a trust region framework (Q301671)
- A discussion on variational analysis in derivative-free optimization (Q829491)
- A Levenberg-Marquardt method for large nonlinear least-squares problems with dynamic accuracy in functions and gradients (Q1616028)
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models (Q1646566)
- Robust optimization of noisy blackbox problems using the mesh adaptive direct search algorithm (Q1653265)
- Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates (Q2028452)
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives (Q2052165)
- A zeroth order method for stochastic weakly convex optimization (Q2057220)
- Expected complexity analysis of stochastic direct-search (Q2070336)
- Linesearch Newton-CG methods for convex optimization with noise (Q2084588)
- A stochastic first-order trust-region method with inexact restoration for finite-sum minimization (Q2111466)
- A new nonmonotone adaptive trust region algorithm (Q2128413)
- Iteratively sampling scheme for stochastic optimization with variable number sample path (Q2157906)
- Newton-type methods for non-convex optimization under inexact Hessian information (Q2205970)
- Constrained stochastic blackbox optimization using a progressive barrier and probabilistic estimates (Q2687061)
- The impact of noise on evaluation complexity: the deterministic trust-region case (Q2696963)
- Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models (Q5079553)
- Coupled Learning Enabled Stochastic Programming with Endogenous Uncertainty (Q5085157)
- A Stochastic Trust-Region Framework for Policy Optimization (Q5096136)
- Open Problem—Iterative Schemes for Stochastic Optimization: Convergence Statements and Limit Theorems (Q5113905)
- Surrogate-Based Promising Area Search for Lipschitz Continuous Simulation Optimization (Q5137952)
- Solving Nonsmooth and Nonconvex Compound Stochastic Programs with Applications to Risk Measure Minimization (Q5870366)
- Scalable subspace methods for derivative-free nonlinear least-squares optimization (Q6038650)
- An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians (Q6038658)
- Convergence analysis of a subsampled Levenberg-Marquardt algorithm (Q6047687)
- Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming (Q6052061)
- A trust region method for noisy unconstrained optimization (Q6052069)
- An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints (Q6072951)
- TREGO: a trust-region framework for efficient global optimization (Q6102171)
- Globally Convergent Multilevel Training of Deep Residual Networks (Q6108152)
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound (Q6116246)
- Hessian averaging in stochastic Newton methods achieves superlinear convergence (Q6165593)
- Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques (Q6170037)
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization (Q6175706)
- Riemannian Natural Gradient Methods (Q6189169)