Optimal Algorithms for Stochastic Complementary Composite Minimization
Publication:6136660
Abstract: Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization in the stochastic setting. This problem corresponds to the minimization of the sum of a (weakly) smooth function endowed with a stochastic first-order oracle, and a structured uniformly convex (possibly nonsmooth and non-Lipschitz) regularization term. Despite intensive work on closely related settings, prior to our work no complexity bounds for this problem were known. We close this gap by providing novel excess risk bounds, both in expectation and with high probability. Our algorithms are nearly optimal, which we prove via novel lower complexity bounds for this class of problems. We conclude by providing numerical results comparing our methods to the state of the art.
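The abstract describes the problem class only in words; as a sketch, the complementary composite template it refers to can be written as follows (notation here is assumed for illustration, not taken from this page):

```latex
% Hedged sketch of the problem class (assumed notation):
\min_{x \in X} \; F(x) \;=\; f(x) + \psi(x),
% where f is (weakly) smooth, i.e. has a H\"older-continuous gradient
% for some exponent \nu \in (0,1]:
\qquad \|\nabla f(x) - \nabla f(y)\|_{*} \;\le\; L \,\|x - y\|^{\nu},
% and is accessed only through a stochastic first-order oracle
% g(x,\xi) with \mathbb{E}_{\xi}[\,g(x,\xi)\,] = \nabla f(x);
% the regularizer \psi is uniformly convex of degree q \ge 2
% with modulus \mu > 0:
\qquad \psi(\lambda x + (1-\lambda) y) \;\le\;
  \lambda \psi(x) + (1-\lambda)\psi(y)
  \;-\; \tfrac{\mu}{q}\,\lambda(1-\lambda)\,\|x - y\|^{q}.
```

The two assumptions are "complementary" in that smoothness is required only of \(f\) and strong curvature only of \(\psi\); for example, \(\psi(x) = \|x\|_p^p\) with \(p \ge 2\) is uniformly convex of degree \(p\) yet nonsmooth at the origin and non-Lipschitz globally, matching the regularizers the abstract allows.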
Recommendations
- Complementary composite minimization, small gradients in general norms, and applications
- An optimal method for stochastic composite optimization
- On stochastic primal-dual hybrid gradient approach for compositely regularized minimization
- A smoothing stochastic gradient method for composite optimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
Cites work
- Scientific article (zbMATH DE number 3790208; no title available)
- Scientific article (zbMATH DE number 1433619; no title available)
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A Statistical View of Some Chemometrics Regression Tools
- A homotopy method for \(\ell_p\) regression provably beyond self-concordance and in input-sparsity time
- An optimal method for stochastic composite optimization
- Convergence Analysis of a Proximal-Like Minimization Algorithm Using Bregman Functions
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- First-order and stochastic optimization methods for machine learning
- First-order methods in optimization
- First-order methods of smooth convex optimization with inexact oracle
- Foundations of machine learning
- Gradient methods for minimizing composite functions
- Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization
- High-dimensional statistics. A non-asymptotic viewpoint
- Iterative refinement for \(\ell_p\)-norm regression
- On first-order algorithms for \(\ell_1\)/nuclear norm minimization
- On norms in some class of exponential type Orlicz spaces of random variables
- On uniformly convex functions
- Optimal Affine-Invariant Smooth Minimization Algorithms
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization. II: Shrinking procedures and optimal algorithms
- Regularized learning schemes in feature Banach spaces
- Robust Stochastic Approximation Approach to Stochastic Programming
- Sharp uniform convexity and smoothness inequalities for trace norms
- Sparsity in penalized empirical risk minimization
- Understanding machine learning. From theory to algorithms
- Universal gradient methods for convex optimization problems
Cited in (1)