Convergence rates of gradient methods for convex optimization in the space of measures
DOI: 10.5802/ojmo.20 · zbMATH: 1530.90073 · arXiv: 2105.08368 · OpenAlex: W3161664689 · MaRDI QID: Q6114893
Publication date: 12 July 2023
Published in: OJMO (Open Journal of Mathematical Optimization)
Full work available at URL: https://arxiv.org/abs/2105.08368
Mathematics Subject Classification:
- Numerical mathematical programming methods (65K05)
- Convex programming (90C25)
- Programming in abstract spaces (90C48)
- Methods of reduced gradient type (90C52)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Fast projection onto the simplex and the \(\ell_1\) ball
- Nonparametric stochastic approximation with large step-sizes
- Exact reconstruction using Beurling minimal extrapolation
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- The Moreau envelope function and proximal mapping in the sense of the Bregman distance
- Exponentiated gradient versus gradient descent for linear predictors
- Linear convergence of iterative soft-thresholding
- Riemannian geometry as determined by the volumes of small geodesic balls
- Integrals which are convex functionals. II
- On early stopping in gradient descent learning
- The geometry of off-the-grid compressed sensing
- Global Optimization with Polynomials and the Problem of Moments
- Mean-value theorems for Riemannian manifolds
- Bregman Monotone Optimization Algorithms
- Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces
- Inverse problems in spaces of measures
- A mean field view of the landscape of two-layer neural networks
- Particle dual averaging: optimization of mean field neural network with global convergence rate analysis
- Thresholding gradient methods in Hilbert spaces: support identification and linear convergence
- The sliding Frank–Wolfe algorithm and its application to super-resolution microscopy
- Solving Large-Scale Optimization Problems with a Convergence Rate Independent of Grid Size
- Breaking the Curse of Dimensionality with Convex Neural Networks
- Towards a Mathematical Theory of Super-resolution
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- The Alternating Descent Conditional Gradient Method for Sparse Inverse Problems
- "FISTA" in Banach spaces with adaptive discretisations