High probability bounds on AdaGrad for constrained weakly convex optimization
Publication: Q6649705
DOI: 10.1016/j.jco.2024.101889
MaRDI QID: Q6649705
Authors: Yusu Hong, Junhong Lin
Publication date: 6 December 2024
Published in: Journal of Complexity
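The publication studies AdaGrad for constrained weakly convex problems. For reference, a minimal sketch of the projected AdaGrad update (the classic coordinate-wise scheme of Duchi et al., cited below) is given here; the objective, step size, and constraint set are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def adagrad_projected(grad, project, x0, eta=0.5, eps=1e-8, steps=200):
    """Projected AdaGrad sketch: coordinate-wise step sizes from the
    accumulated squared (sub)gradients, followed by Euclidean projection
    onto the constraint set."""
    x = np.asarray(x0, dtype=float)
    G = np.zeros_like(x)  # running sum of squared gradients
    for _ in range(steps):
        g = grad(x)
        G += g * g
        x = project(x - eta * g / (np.sqrt(G) + eps))
    return x

# Illustrative toy problem (not from the paper):
# minimize the nonsmooth function |x - 2| over the box [-1, 1].
grad = lambda x: np.sign(x - 2.0)           # a subgradient of |x - 2|
project = lambda x: np.clip(x, -1.0, 1.0)   # projection onto [-1, 1]
x_star = adagrad_projected(grad, project, x0=np.array([0.0]))
# The constrained minimizer is x = 1, the boundary point closest to 2.
```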
Recommendations
- scientific article; zbMATH DE number 7306906
- Sequential convergence of AdaGrad algorithm for smooth convex optimization
- High-probability complexity bounds for non-smooth stochastic convex optimization with heavy-tailed noise
- Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression
- Convergence analysis of AdaBound with relaxed bound functions for non-convex optimization
MSC classifications: Artificial intelligence (68Txx); Mathematical programming (90Cxx); Numerical methods for mathematical programming, optimization and variational techniques (65Kxx)
Cites Work
- Adaptive subgradient methods for online learning and stochastic optimization
- Robust principal component analysis?
- A Stochastic Approximation Method
- Robust Stochastic Approximation Approach to Stochastic Programming
- Convex Analysis
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- An old-new concept of convex risk measures: the optimized certainty equivalent
- On the Generalization Ability of On-Line Learning Algorithms
- Stochastic generalized gradient method for nonconvex nonsmooth stochastic optimization
- Online Regularized Classification Algorithms
- Expected Utility, Penalty Functions, and Duality in Stochastic Nonlinear Programming
- A Linearization Method for Nonsmooth Stochastic Programming Problems
- Lectures on convex optimization
- Optimization methods for large-scale machine learning
- Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces
- Modified Fejér sequences and applications
- Solving (most) of a set of quadratic equalities: composite optimization for robust phase retrieval
- Stochastic model-based minimization of weakly convex functions
- Efficiency of minimizing compositions of convex functions and smooth maps
- Proximally guided stochastic subgradient method for nonsmooth, nonconvex problems
- Title not available
- The nonsmooth landscape of phase retrieval
- Moreau envelope augmented Lagrangian method for nonconvex optimization with linear constraints
- Convergence of online mirror descent
- Lower bounds for non-convex stochastic optimization
- Capacity dependent analysis for functional online learning algorithms
Cited In (1)