Lower bounds for finding stationary points I

From MaRDI portal
Publication: 2205972

DOI: 10.1007/s10107-019-01406-y
zbMath: 1451.90128
arXiv: 1710.11606
OpenAlex: W2964000362
MaRDI QID: Q2205972

Yair Carmon, Oliver Hinder, Aaron Sidford, John C. Duchi

Publication date: 21 October 2020

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://arxiv.org/abs/1710.11606

Related Items

Convergence guarantees for a class of non-convex and non-smooth optimization problems

Mathematical foundations of machine learning. Abstracts from the workshop held March 21--27, 2021 (hybrid meeting)

A cubic regularization of Newton's method with finite difference Hessian approximations

On lower iteration complexity bounds for the convex concave saddle point problems

Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization

The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions

Lower bounds for non-convex stochastic optimization

An accelerated first-order method for non-convex optimization on manifolds

Conditions for linear convergence of the gradient method for non-convex optimization

A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees

Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization

Recent Theoretical Advances in Non-Convex Optimization

Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems

Lower bounds for finding stationary points II: first-order methods

Implementable tensor methods in unconstrained convex optimization

Efficient Search of First-Order Nash Equilibria in Nonconvex-Concave Smooth Min-Max Problems

Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions

Some worst-case datasets of deterministic first-order methods for solving binary logistic regression

Adaptive regularization with cubics on manifolds

Near-Optimal Hyperfast Second-Order Method for Convex Optimization

An adaptive high order method for finding third-order critical points of nonconvex optimization

Perturbed iterate SGD for Lipschitz continuous loss functions

Generalized Momentum-Based Methods: A Hamiltonian Perspective

A Global Dual Error Bound and Its Application to the Analysis of Linearly Constrained Nonconvex Optimization

A hybrid stochastic optimization framework for composite nonconvex optimization


Uses Software


Cites Work


This page was built for publication: Lower bounds for finding stationary points I