Pages that link to "Item:Q4509729"
The following pages link to Gradient Convergence in Gradient Methods with Errors (Q4509729):
Displaying 50 items.
- Global convergence of the Dai-Yuan conjugate gradient method with perturbations (Q263134)
- An incremental decomposition method for unconstrained optimization (Q272371)
- Stochastic forward-backward splitting for monotone inclusions (Q289110)
- A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks (Q301668)
- On stochastic gradient and subgradient methods with adaptive steplength sequences (Q445032)
- Distributed stochastic subgradient projection algorithms for convex optimization (Q620442)
- Random algorithms for convex minimization problems (Q644912)
- Incremental proximal methods for large scale convex optimization (Q644913)
- Robust inversion, dimensionality reduction, and randomized sampling (Q715245)
- On the resolution of misspecified convex optimization and monotone variational inequality problems (Q782913)
- A combined direction stochastic approximation algorithm (Q845559)
- A policy gradient method for semi-Markov decision processes with application to call admission control (Q859693)
- Simulation-based optimal sensor scheduling with application to observer trajectory planning (Q883376)
- A stochastic gradient type algorithm for closed-loop problems (Q1013967)
- Convergence analysis of contrastive divergence algorithm based on gradient method with errors (Q1665421)
- Adaptive stochastic approximation algorithm (Q1689446)
- Optimal subgradient algorithms for large-scale convex optimization in simple domains (Q1689457)
- Asymptotic bias of stochastic gradient search (Q1704136)
- An incremental subgradient method on Riemannian manifolds (Q1752648)
- Convergence of line search methods for unconstrained optimization (Q1881700)
- Steered sequential projections for the inconsistent convex feasibility problem (Q1888607)
- New stochastic approximation algorithms with adaptive step sizes (Q1926628)
- A new hybrid stochastic approximation algorithm (Q1941202)
- A variational inequality based stochastic approximation for inverse problems in stochastic partial differential equations (Q1982216)
- Online drift estimation for jump-diffusion processes (Q1983620)
- DGM: a deep learning algorithm for solving partial differential equations (Q2002333)
- Convergence and convergence rate of stochastic gradient search in the case of multiple and non-isolated extrema (Q2018557)
- Convergence of stochastic proximal gradient algorithm (Q2019902)
- Derivation and analysis of parallel-in-time neural ordinary differential equations (Q2023867)
- Bounds for the tracking error of first-order online optimization methods (Q2032000)
- A study on distributed optimization over large-scale networked systems (Q2036031)
- Incremental without replacement sampling in nonconvex optimization (Q2046568)
- A persistent adjoint method with dynamic time-scaling and an application to mass action kinetics (Q2066190)
- A regularized stochastic subgradient projection method for an optimal control problem in a stochastic partial differential equation (Q2080615)
- Multimodal correlations-based data clustering (Q2087416)
- SABRINA: a stochastic subspace majorization-minimization algorithm (Q2095568)
- On the convergence of a block-coordinate incremental gradient method (Q2100401)
- From inexact optimization to learning via gradient concentration (Q2111477)
- A variational inequality based stochastic approximation for estimating the flexural rigidity in random fourth-order models (Q2137330)
- A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions (Q2167333)
- The computational asymptotics of Gaussian variational inference and the Laplace approximation (Q2172111)
- Block layer decomposition schemes for training deep neural networks (Q2173515)
- From structured data to evolution linear partial differential equations (Q2222252)
- Backtracking gradient descent method and some applications in large scale optimisation. II: Algorithms and experiments (Q2234294)
- Fully asynchronous stochastic coordinate descent: a tight lower bound on the parallelism achieving linear speedup (Q2235160)
- Boundedness and convergence analysis of weight elimination for cyclic training of neural networks (Q2281677)
- Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants (Q2282819)
- An adaptive optimization scheme with satisfactory transient performance (Q2390563)
- Distributed nonconvex constrained optimization over time-varying digraphs (Q2425183)
- Convergence property of gradient-type methods with non-monotone line search in the presence of perturbations (Q2489332)