A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
DOI: 10.1007/s10589-023-00504-0
zbMath: 1522.90031
MaRDI QID: Q6051310
Authors: Jin Zhang, Hai-An Yin, Zhen-Ping Yang, Gui-Hua Lin
Publication date: 19 October 2023
Published in: Computational Optimization and Applications
MSC classification: Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Stochastic programming (90C15)
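
For orientation, a minimal sketch (in LaTeX) of the problem class the title names. This is the standard stochastic nonsmooth convex composite form; the paper's precise structural assumptions and its dual-based construction are not part of this record, so the display below is an assumption, not the authors' exact model:

  \min_{x \in \mathbb{R}^n} \; F(x) \;=\; \mathbb{E}_{\xi}\!\left[ f(x, \xi) \right] \;+\; g(x)

Here f(\cdot, \xi) is convex for each realization \xi of the random variable, and g is a proper, closed, convex, possibly nonsmooth regularizer; a typical instance is g(x) = \lambda \|x\|_1, which recovers the Lasso-type problems appearing among the cited works below.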
Cites Work
- An optimal method for stochastic composite optimization
- Handbook of simulation optimization
- Ergodic convergence to a zero of the sum of monotone operators in Hilbert space
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Convergence of stochastic proximal gradient algorithm
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- An efficient Hessian based algorithm for solving large-scale sparse group Lasso problems
- First-order and stochastic optimization methods for machine learning
- Stochastic First-Order Methods with Random Constraint Projection
- Proximal Splitting Methods in Signal Processing
- Monotone Operators and the Proximal Point Algorithm
- Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
- Variational Analysis
- Nonasymptotic convergence of stochastic proximal point algorithms for constrained convex optimization
- On Efficiently Solving the Subproblems of a Level-Set Method for Fused Lasso Problems
- A Highly Efficient Semismooth Newton Augmented Lagrangian Method for Solving Lasso Problems
- Stochastic Model-Based Minimization of Weakly Convex Functions
- Optimization Methods for Large-Scale Machine Learning
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- Solving the OSCAR and SLOPE Models Using a Semismooth Newton-Based Augmented Lagrangian Method
- The importance of better models in stochastic optimization
- Efficient Sparse Semismooth Newton Methods for the Clustered Lasso Problem
- Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Regularization and Variable Selection Via the Elastic Net
- On perturbed proximal gradient algorithms
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- Understanding Machine Learning
- A Stochastic Approximation Method
- Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization