Scientific article; zbMATH DE number 3313108
zbMath: 0196.47701
MaRDI QID: Q5593503
Publication date: 1963
Title: unavailable in the zbMATH Open Web Interface due to conflicting licenses.
Related Items
Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes
Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
Understanding generalization error of SGD in nonconvex optimization
Loss landscapes and optimization in over-parameterized non-linear systems and neural networks
Sparse optimization on measures with over-parameterized gradient descent
Unnamed Item
Approximating of unstable cycles in nonlinear autonomous systems
Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints
Computational and approximate methods of optimal control
Stepsize analysis for descent methods
Rates of convergence for inexact Krasnosel'skii-Mann iterations in Banach spaces
Riemannian optimization via Frank-Wolfe methods
Global convergence of the gradient method for functions definable in o-minimal structures
Stochastic momentum methods for non-convex learning without bounded assumptions
A quasi-Newton trust-region method for optimization under uncertainty using stochastic simplex approximate gradients
Variance reduction on general adaptive stochastic mirror descent
Conditions for linear convergence of the gradient method for non-convex optimization
Sufficient conditions for the linear convergence of an algorithm for finding the metric projection of a point onto a convex compact set
Riemannian Hamiltonian Methods for Min-Max Optimization on Manifolds
Run-and-inspect method for nonconvex optimization and global optimality bounds for R-local minimizers
Byzantine-robust loopless stochastic variance-reduced gradient
Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
An adaptive Riemannian gradient method without function evaluations
Convergence rates analysis of a multiobjective proximal gradient method
Worst-case evaluation complexity of a derivative-free quadratic regularization method
Statistical Analysis of Fixed Mini-Batch Gradient Descent Estimator
Revisiting the approximate Carathéodory problem via the Frank-Wolfe algorithm
Proximal stochastic recursive momentum algorithm for nonsmooth nonconvex optimization problems
A simple nearly optimal restart scheme for speeding up first-order methods
Approximate Matrix and Tensor Diagonalization by Unitary Transformations: Convergence of Jacobi-Type Algorithms
Recent Theoretical Advances in Non-Convex Optimization
Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
Non-monotone Behavior of the Heavy Ball Method
A linearly convergent stochastic recursive gradient method for convex optimization
Randomized Extended Average Block Kaczmarz for Solving Least Squares
The gradient projection algorithm for a proximally smooth set and a function with Lipschitz continuous gradient
Algorithm model for penalty functions-type iterative procedures
Cubic regularization of Newton method and its global performance
Convexity, monotonicity, and gradient processes in Hilbert space
Multiplier methods: A survey
A canonical structure for iterative procedures
Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions
Regional complexity analysis of algorithms for nonconvex smooth optimization
Solution of location problems with radial cost functions
Convergence conditions for monotone, autonomous, and nondeterministic iterative algorithms
Conditional gradient algorithms with open loop step size rules
Is there an analog of Nesterov acceleration for gradient-based MCMC?
A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
Sharpness, Restart, and Acceleration
The gradient projection algorithm for smooth sets and functions in nonconvex case
Level-set subdifferential error bounds and linear convergence of Bregman proximal gradient method
New analysis of linear convergence of gradient-type methods via unifying error bound conditions
Regularized Newton method for unconstrained convex optimization
Gradient Projection and Conditional Gradient Methods for Constrained Nonconvex Minimization
An FFT-based fast gradient method for elastic and inelastic unit cell homogenization problems
Distributed optimization for degenerate loss functions arising from over-parameterization
Influence of geometric properties of space on the convergence of Cauchy's method in the best-approximation problem
A projection method for least-squares solutions to overdetermined systems of linear inequalities
Investigation of the total error in minimization of functionals with constraints
A piecewise conservative method for unconstrained convex optimization
On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity
Limited-memory common-directions method for large-scale optimization: convergence, parallelization, and distributed optimization
Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction
Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
Learning over No-Preferred and Preferred Sequence of Items for Robust Recommendation
An overall study of convergence conditions for algorithms in nonlinear programming
Avoiding bad steps in Frank-Wolfe variants
Proximal Gradient Methods for Machine Learning and Imaging
Adaptive Gauss-Newton method for solving systems of nonlinear equations