Pages that link to "Item:Q5215517"
From MaRDI portal
The following pages link to A Stochastic Line Search Method with Expected Complexity Analysis (Q5215517):
Displaying 46 items.
- A discussion on variational analysis in derivative-free optimization (Q829491)
- Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates (Q2028452)
- A stochastic subspace approach to gradient-free optimization in high dimensions (Q2044475)
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives (Q2052165)
- Expected complexity analysis of stochastic direct-search (Q2070336)
- Linesearch Newton-CG methods for convex optimization with noise (Q2084588)
- A stochastic first-order trust-region method with inexact restoration for finite-sum minimization (Q2111466)
- An inexact restoration-nonsmooth algorithm with variable accuracy for stochastic nonsmooth convex optimization problems in machine learning and stochastic linear complementarity problems (Q2112678)
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization (Q2143221)
- Parameter calibration in wake effect simulation model with stochastic gradient descent and stratified sampling (Q2170435)
- Constrained stochastic blackbox optimization using a progressive barrier and probabilistic estimates (Q2687061)
- Discriminative Bayesian filtering lends momentum to the stochastic Newton method for minimizing log-convex functions (Q2693789)
- The impact of noise on evaluation complexity: the deterministic trust-region case (Q2696963)
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise (Q4997171)
- Optimization of Stochastic Blackboxes with Adaptive Precision (Q5020850)
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy (Q5034938)
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization (Q5076721)
- Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models (Q5079553)
- Global Linear Convergence of Evolution Strategies on More than Smooth Strongly Convex Functions (Q5081786)
- Analysis of the BFGS Method with Errors (Q5210518)
- Derivative-free optimization methods (Q5230522)
- Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization (Q5244400)
- Inexact SARAH algorithm for stochastic optimization (Q5859016)
- LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums (Q5879118)
- An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians (Q6038658)
- Zeroth-order optimization with orthogonal random directions (Q6038668)
- A stochastic gradient method for a class of nonlinear PDE-constrained optimal control problems under uncertainty (Q6041823)
- Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming (Q6052061)
- A trust region method for noisy unconstrained optimization (Q6052069)
- Direct Search Based on Probabilistic Descent in Reduced Spaces (Q6071887)
- A simplified convergence theory for Byzantine resilient stochastic gradient descent (Q6114944)
- Adaptive step size rules for stochastic optimization in large-scale learning (Q6116586)
- Stochastic regularized Newton methods for nonlinear equations (Q6158978)
- A line search based proximal stochastic gradient algorithm with dynamical variance reduction (Q6159404)
- Trust-region algorithms: probabilistic complexity and intrinsic noise with applications to subsampling techniques (Q6170037)
- Stochastic trust-region and direct-search methods: a weak tail bound condition and reduced sample sizing (Q6561380)
- High probability complexity bounds for adaptive step search based on stochastic oracles (Q6573017)
- Stochastic trust-region algorithm in random subspaces with convergence and expected complexity analyses (Q6580002)
- A stochastic gradient method with variance control and variable learning rate for deep learning (Q6582036)
- Subsampled first-order optimization methods with applications in imaging (Q6606441)
- First- and second-order high probability complexity bounds for trust-region methods with noisy oracles (Q6608030)
- Bolstering stochastic gradient descent with model building (Q6635852)
- AN-SPS: adaptive sample size nonmonotone line search spectral projected subgradient method for convex constrained optimization problems (Q6644996)
- A sequential quadratic programming method with high-probability complexity bounds for nonlinear equality-constrained stochastic optimization (Q6663117)
- SketchySGD: reliable stochastic optimization via randomized curvature estimates (Q6664471)
- Sample complexity analysis for adaptive optimization algorithms with stochastic oracles (Q6665393)