Pages that link to "Item:Q5854329"
The following pages link to "Exact and inexact subsampled Newton methods for optimization" (Q5854329):
Displaying 36 items.
- Sub-sampled Newton methods (Q1739039)
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization (Q2082285)
- A hybrid stochastic optimization framework for composite nonconvex optimization (Q2118109)
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization (Q2149551)
- Inexact restoration with subsampled trust-region methods for finite-sum minimization (Q2191786)
- Generalized self-concordant functions: a recipe for Newton-type methods (Q2330645)
- Statistically equivalent surrogate material models: impact of random imperfections on the elasto-plastic response (Q2679290)
- Adversarial classification via distributional robustness with Wasserstein ambiguity (Q2693647)
- Discriminative Bayesian filtering lends momentum to the stochastic Newton method for minimizing log-convex functions (Q2693789)
- (Q4633055)
- (Q4637040)
- (Q4969259)
- A robust multi-batch L-BFGS method for machine learning (Q4972551)
- (Q5038021)
- A fully stochastic second-order trust region method (Q5043844)
- Quasi-Newton methods for machine learning: forget the past, just sample (Q5058389)
- Sketched Newton–Raphson (Q5093644)
- slimTrain: A Stochastic Approximation Method for Training Separable Deep Neural Networks (Q5095499)
- A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization (Q5131958)
- An investigation of Newton-Sketch and subsampled Newton methods (Q5135249)
- Convergence of Newton-MR under Inexact Hessian Information (Q5148404)
- Train Like a (Var)Pro: Efficient Training of Neural Networks with Variable Projection (Q5162626)
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization (Q5244401)
- LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums (Q5879118)
- An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians (Q6038658)
- Convergence analysis of a subsampled Levenberg–Marquardt algorithm (Q6047687)
- A trust region method for noisy unconstrained optimization (Q6052069)
- An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints (Q6072951)
- SVRG meets AdaGrad: painless variance reduction (Q6097116)
- An overview of stochastic quasi-Newton methods for large-scale machine learning (Q6097379)
- On maximum residual nonlinear Kaczmarz-type algorithms for large nonlinear systems of equations (Q6100585)
- Newton-MR: inexact Newton method with minimum residual sub-problem solver (Q6114941)
- Generalized linear models for massive data via doubly-sketching (Q6117016)
- Hessian averaging in stochastic Newton methods achieves superlinear convergence (Q6165593)
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization (Q6175706)
- On pseudoinverse-free block maximum residual nonlinear Kaczmarz method for solving large-scale nonlinear system of equations (Q6179938)