scientific article; zbMATH DE number 7306860
Publication:5148937
Authors: Andrei Kulunchakov, Julien Mairal
Publication date: 5 February 2021
Full work available at URL: https://arxiv.org/abs/1901.08788
Title: Estimate sequences for stochastic composite optimization: variance reduction, acceleration, and robustness to noise
Related Items
- Unnamed Item
- Adaptive proximal SGD based on new estimating sequences for sparser ERM
- Recent theoretical advances in decentralized distributed convex optimization
- Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Inexact model: a framework for optimization and variational inequalities
Uses Software
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Gradient methods for minimizing composite functions
- An optimal method for stochastic composite optimization
- A sparsity preserving stochastic gradient methods for sparse regression
- On the complexity analysis of randomized block-coordinate descent methods
- Minimizing finite sums with the stochastic average gradient
- Introductory lectures on convex optimization. A basic course.
- An optimal randomized incremental gradient method
- Cubic regularization of Newton method and its global performance
- Sparse Modeling for Image and Vision Processing
- On Lower and Upper Bounds for Smooth and Strongly Convex Optimization Problems
- Robust Stochastic Approximation Approach to Stochastic Programming
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- Stability Selection
- Optimization Methods for Large-Scale Machine Learning
- Random Gradient Extrapolation for Distributed and Stochastic Optimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Katyusha: the first direct acceleration of stochastic gradient methods
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- Proximité et dualité dans un espace hilbertien
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization