scientific article; zbMATH DE number 3313108

From MaRDI portal

Publication:5593503

zbMath: 0196.47701
MaRDI QID: Q5593503

Boris T. Polyak

Publication date: 1963


Title: zbMATH Open Web Interface contents unavailable due to conflicting licenses.



Related Items (69)

Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes
Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
Understanding generalization error of SGD in nonconvex optimization
Loss landscapes and optimization in over-parameterized non-linear systems and neural networks
Sparse optimization on measures with over-parameterized gradient descent
Unnamed Item
Approximating of unstable cycles in nonlinear autonomous systems
Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints
Computational and approximate methods of optimal control
Stepsize analysis for descent methods
Rates of convergence for inexact Krasnosel'skii-Mann iterations in Banach spaces
Riemannian optimization via Frank-Wolfe methods
Global convergence of the gradient method for functions definable in o-minimal structures
Stochastic momentum methods for non-convex learning without bounded assumptions
A quasi-Newton trust-region method for optimization under uncertainty using stochastic simplex approximate gradients
Variance reduction on general adaptive stochastic mirror descent
Conditions for linear convergence of the gradient method for non-convex optimization
Sufficient conditions for the linear convergence of an algorithm for finding the metric projection of a point onto a convex compact set
Riemannian Hamiltonian Methods for Min-Max Optimization on Manifolds
Run-and-inspect method for nonconvex optimization and global optimality bounds for R-local minimizers
Byzantine-robust loopless stochastic variance-reduced gradient
Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
An adaptive Riemannian gradient method without function evaluations
Convergence rates analysis of a multiobjective proximal gradient method
Worst-case evaluation complexity of a derivative-free quadratic regularization method
Statistical Analysis of Fixed Mini-Batch Gradient Descent Estimator
Revisiting the approximate Carathéodory problem via the Frank-Wolfe algorithm
Proximal stochastic recursive momentum algorithm for nonsmooth nonconvex optimization problems
A simple nearly optimal restart scheme for speeding up first-order methods
Approximate Matrix and Tensor Diagonalization by Unitary Transformations: Convergence of Jacobi-Type Algorithms
Recent Theoretical Advances in Non-Convex Optimization
Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
Non-monotone Behavior of the Heavy Ball Method
A linearly convergent stochastic recursive gradient method for convex optimization
Randomized Extended Average Block Kaczmarz for Solving Least Squares
The gradient projection algorithm for a proximally smooth set and a function with Lipschitz continuous gradient
Algorithm model for penalty functions-type iterative procedures
Cubic regularization of Newton method and its global performance
Convexity, monotonicity, and gradient processes in Hilbert space
Multiplier methods: A survey
A canonical structure for iterative procedures
Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions
Regional complexity analysis of algorithms for nonconvex smooth optimization
Solution of location problems with radial cost functions
Conditions de convergence pour les algorithmes itératifs monotones, autonomes et non déterministes
Conditional gradient algorithms with open loop step size rules
Is there an analog of Nesterov acceleration for gradient-based MCMC?
A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
Sharpness, Restart, and Acceleration
The gradient projection algorithm for smooth sets and functions in nonconvex case
Level-set subdifferential error bounds and linear convergence of Bregman proximal gradient method
New analysis of linear convergence of gradient-type methods via unifying error bound conditions
Regularized Newton method for unconstrained convex optimization
Gradient Projection and Conditional Gradient Methods for Constrained Nonconvex Minimization
An FFT-based fast gradient method for elastic and inelastic unit cell homogenization problems
Distributed optimization for degenerate loss functions arising from over-parameterization
Influence of geometric properties of space on the convergence of Cauchy's method in the best-approximation problem
A projection method for least-squares solutions to overdetermined systems of linear inequalities
Investigation of the total error in minimization of functionals with constraints
A piecewise conservative method for unconstrained convex optimization
On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity
Limited-memory common-directions method for large-scale optimization: convergence, parallelization, and distributed optimization
Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction
Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
Learning over No-Preferred and Preferred Sequence of Items for Robust Recommendation
An overall study of convergence conditions for algorithms in nonlinear programming
Avoiding bad steps in Frank-Wolfe variants
Proximal Gradient Methods for Machine Learning and Imaging
Adaptive Gauss-Newton method for solving systems of nonlinear equations


This page was built for publication: