Finding approximate local minima faster than gradient descent

From MaRDI portal

Publication:4978058

DOI: 10.1145/3055399.3055464
zbMath: 1369.68290
arXiv: 1611.01146
OpenAlex: W2609037894
MaRDI QID: Q4978058

Tengyu Ma, Naman Agarwal, Brian Bullins, Zeyuan Allen-Zhu, Elad Hazan

Publication date: 17 August 2017

Published in: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing

Full work available at URL: https://arxiv.org/abs/1611.01146




Related Items (40)

Adaptive Quadratically Regularized Newton Method for Riemannian Optimization
Unnamed Item
Unnamed Item
Unnamed Item
Accelerated Methods for NonConvex Optimization
Escaping Strict Saddle Points of the Moreau Envelope in Nonsmooth Optimization
Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
Approximating the nearest stable discrete-time system
Unnamed Item
A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees
A Newton-CG Based Augmented Lagrangian Method for Finding a Second-Order Stationary Point of Nonconvex Equality Constrained Optimization with Complexity Guarantees
First-Order Methods for Nonconvex Quadratic Minimization
Optimizing mean field spin glasses with external field
Newton-type methods for non-convex optimization under inexact Hessian information
Lower bounds for finding stationary points I
A Newton-Based Method for Nonconvex Optimization with Fast Evasion of Saddle Points
Second-Order Guarantees of Distributed Gradient Algorithms
Recent Theoretical Advances in Non-Convex Optimization
Lower bounds for finding stationary points II: first-order methods
Solving Large-Scale Cubic Regularization by a Generalized Eigenvalue Problem
Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
The global optimization geometry of shallow linear neural networks
Augmented Lagrangians with constrained subproblems and convergence to second-order stationary points
Unnamed Item
Minimizing uniformly convex functions by cubic regularization of Newton method
A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds
Adaptive regularization with cubics on manifolds
A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
An accelerated first-order method with complexity analysis for solving cubic regularization subproblems
A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization
Provable accelerated gradient method for nonconvex low rank optimization
Unnamed Item
Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
Unnamed Item
Trust-Region Newton-CG with Strong Second-Order Complexity Guarantees for Nonconvex Optimization
Stochastic proximal linear method for structured non-convex problems
Unnamed Item
One-dimensional system arising in stochastic gradient descent
Higher-Order Methods for Convex-Concave Min-Max Optimization and Monotone Variational Inequalities


This page was built for publication: Finding approximate local minima faster than gradient descent