Gradient Convergence in Gradient Methods with Errors


DOI: 10.1137/S1052623497331063
zbMath: 1049.90130
MaRDI QID: Q4509729

Dimitri P. Bertsekas, John N. Tsitsiklis

Publication date: 19 October 2000

Published in: SIAM Journal on Optimization
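For context, this is Bertsekas and Tsitsiklis's convergence analysis of gradient methods whose updates are corrupted by deterministic or stochastic errors. A brief summary of the deterministic result, in standard notation that may differ from the paper's own symbols: for the iteration

\[
x_{k+1} = x_k + \gamma_k \,(s_k + w_k),
\]

where \(s_k\) is a descent direction and \(w_k\) is an error term, if \(\nabla f\) is Lipschitz continuous, the stepsizes satisfy \(\sum_k \gamma_k = \infty\) and \(\sum_k \gamma_k^2 < \infty\), and the errors obey a growth bound of the form \(\|w_k\| \le \gamma_k \,(p + q\,\|\nabla f(x_k)\|)\) for constants \(p, q \ge 0\), then either \(f(x_k) \to -\infty\) or \(f(x_k)\) converges to a finite value and \(\nabla f(x_k) \to 0\). This framework covers stochastic gradient and incremental gradient methods, which explains the breadth of the related items listed below.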




Related Items

An incremental decomposition method for unconstrained optimization
Convergence of line search methods for unconstrained optimization
Stochastic forward-backward splitting for monotone inclusions
Steered sequential projections for the inconsistent convex feasibility problem
A variational inequality based stochastic approximation for estimating the flexural rigidity in random fourth-order models
A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks
Global optimization issues in deep network regression: an overview
A combined direction stochastic approximation algorithm
Descent direction method with line search for unconstrained optimization in noisy environment
An adaptive optimization scheme with satisfactory transient performance
A policy gradient method for semi-Markov decision processes with application to call admission control
A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions
Convergence analysis of contrastive divergence algorithm based on gradient method with errors
The computational asymptotics of Gaussian variational inference and the Laplace approximation
Block layer decomposition schemes for training deep neural networks
An iteratively regularized stochastic gradient method for estimating a random parameter in a stochastic PDE. A variational inequality approach
Simulation-based optimal sensor scheduling with application to observer trajectory planning
New stochastic approximation algorithms with adaptive step sizes
Zeroth-order optimization with orthogonal random directions
Convergence analysis of AdaBound with relaxed bound functions for non-convex optimization
Stochastic momentum methods for non-convex learning without bounded assumptions
Distributed stochastic subgradient projection algorithms for convex optimization
Adaptive stochastic approximation algorithm
Optimal subgradient algorithms for large-scale convex optimization in simple domains
An online gradient-based parameter identification algorithm for the neuro-fuzzy systems
Online parameter estimation for the McKean-Vlasov stochastic differential equation
Distributed nonconvex constrained optimization over time-varying digraphs
A new hybrid stochastic approximation algorithm
Convergence analysis for sigma-pi-sigma neural network based on some relaxed conditions
On stochastic roundoff errors in gradient descent with low-precision computation
A Convergence Study of SGD-Type Methods for Stochastic Optimization
A new regularized stochastic approximation framework for stochastic inverse problems
Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality
Convergence of gradient algorithms for nonconvex \(C^{1+\alpha}\) cost functions
Continuous-time stochastic gradient descent for optimizing over the stationary distribution of stochastic differential equations
GANs training: A game and stochastic control approach
Two-timescale stochastic gradient descent in continuous time with applications to joint online parameter estimation and optimal sensor placement
Asymptotic bias of stochastic gradient search
Stochastic Gradient Descent in Continuous Time
On stochastic gradient and subgradient methods with adaptive steplength sequences
Random algorithms for convex minimization problems
Incremental proximal methods for large scale convex optimization
Timescale Separation in Recurrent Neural Networks
From structured data to evolution linear partial differential equations
String-averaging incremental stochastic subgradient algorithms
Backtracking gradient descent method and some applications in large scale optimisation. II: Algorithms and experiments
Fully asynchronous stochastic coordinate descent: a tight lower bound on the parallelism achieving linear speedup
A variational inequality based stochastic approximation for inverse problems in stochastic partial differential equations
Online drift estimation for jump-diffusion processes
An incremental subgradient method on Riemannian manifolds
DGM: a deep learning algorithm for solving partial differential equations
Convergence property of gradient-type methods with non-monotone line search in the presence of perturbations
The Malliavin gradient method for the calibration of stochastic dynamical models
Convergence and convergence rate of stochastic gradient search in the case of multiple and non-isolated extrema
Convergence of stochastic proximal gradient algorithm
Robust inversion, dimensionality reduction, and randomized sampling
Derivation and analysis of parallel-in-time neural ordinary differential equations
Stochastic approximation algorithms: overview and recent trends
Boundedness and convergence analysis of weight elimination for cyclic training of neural networks
Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants
Bounds for the tracking error of first-order online optimization methods
A study on distributed optimization over large-scale networked systems
Incremental without replacement sampling in nonconvex optimization
A stochastic gradient type algorithm for closed-loop problems
On the Nonergodic Convergence Rate of an Inexact Augmented Lagrangian Framework for Composite Convex Programming
A persistent adjoint method with dynamic time-scaling and an application to mass action kinetics
Projected Stochastic Gradients for Convex Constrained Problems in Hilbert Spaces
Convergence Rate of Incremental Gradient and Incremental Newton Methods
On perturbed steepest descent methods with inexact line search for bilevel convex optimization
A regularized stochastic subgradient projection method for an optimal control problem in a stochastic partial differential equation
Adaptive error control during gradient search for an elliptic optimization problem
On the resolution of misspecified convex optimization and monotone variational inequality problems
Multimodal correlations-based data clustering
SABRINA: a stochastic subspace majorization-minimization algorithm
On the convergence of a block-coordinate incremental gradient method
New combinatorial direction stochastic approximation algorithms
Bregman Finito/MISO for Nonconvex Regularized Finite Sum Minimization without Lipschitz Gradient Continuity
Stochastic Difference-of-Convex-Functions Algorithms for Nonconvex Programming
Distributed Bregman-Distance Algorithms for Min-Max Optimization
From inexact optimization to learning via gradient concentration
Global convergence of the Dai-Yuan conjugate gradient method with perturbations