Optimal Algorithms for Stochastic Complementary Composite Minimization
Publication: 6136660
DOI: 10.1137/22M1530884
arXiv: 2211.01758
OpenAlex: W4390608808
MaRDI QID: Q6136660
FDO: Q6136660
Authors: Alexandre d'Aspremont, Cristóbal Guzmán
Publication date: 17 January 2024
Published in: SIAM Journal on Optimization
Abstract: Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization in the stochastic setting. This problem corresponds to the minimization of the sum of a (weakly) smooth function endowed with a stochastic first-order oracle, and a structured uniformly convex (possibly nonsmooth and non-Lipschitz) regularization term. Despite intensive work on closely related settings, prior to our work no complexity bounds for this problem were known. We close this gap by providing novel excess risk bounds, both in expectation and with high probability. Our algorithms are nearly optimal, which we prove via novel lower complexity bounds for this class of problems. We conclude by providing numerical results comparing our methods to the state of the art.
Full work available at URL: https://arxiv.org/abs/2211.01758
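For orientation, a minimal sketch of the problem class described in the abstract, assuming the standard definitions of weak smoothness and uniform convexity (the paper's exact normalizations may differ):

\[ \min_{x \in X} \; F(x) := f(x) + \psi(x), \]

where \(f\) is \((\kappa, L)\)-weakly smooth, i.e. \(\|\nabla f(x) - \nabla f(y)\|_* \le L \|x - y\|^{\kappa - 1}\) for some \(\kappa \in (1, 2]\), and is available only through a stochastic first-order oracle; the regularizer \(\psi\) is \((\kappa, \mu)\)-uniformly convex, i.e. \(\psi(y) \ge \psi(x) + \langle g, y - x \rangle + \tfrac{\mu}{\kappa} \|y - x\|^{\kappa}\) for all \(g \in \partial \psi(x)\), and may be nonsmooth and non-Lipschitz. The shared exponent \(\kappa\) is what makes the smoothness of \(f\) and the uniform convexity of \(\psi\) complementary.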
Keywords: regularization; accelerated first-order methods; stochastic convex optimization; non-Euclidean composite minimization
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Gradient methods for minimizing composite functions
- First-order methods in optimization
- High-dimensional statistics. A non-asymptotic viewpoint
- Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- Title not available
- A Statistical View of Some Chemometrics Regression Tools
- Understanding machine learning. From theory to algorithms
- First-order methods of smooth convex optimization with inexact oracle
- Convergence Analysis of a Proximal-Like Minimization Algorithm Using Bregman Functions
- An optimal method for stochastic composite optimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization. II: Shrinking procedures and optimal algorithms
- On first-order algorithms for \(\ell_{1}/\)nuclear norm minimization
- Sharp uniform convexity and smoothness inequalities for trace norms
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- Universal gradient methods for convex optimization problems
- Title not available
- On uniformly convex functions
- Sparsity in penalized empirical risk minimization
- Regularized learning schemes in feature Banach spaces
- On norms in some class of exponential type Orlicz spaces of random variables
- Foundations of machine learning
- First-order and stochastic optimization methods for machine learning
- An homotopy method for \(\ell_p\) regression provably beyond self-concordance and in input-sparsity time
- Iterative refinement for \(\ell_p\)-norm regression
- Optimal Affine-Invariant Smooth Minimization Algorithms