Randomized smoothing variance reduction method for large-scale non-smooth convex optimization
DOI: 10.1007/s43069-021-00059-y
zbMATH Open: 1470.90078
OpenAlex: W3181406714
MaRDI QID: Q2033403
Publication date: 17 June 2021
Published in: SN Operations Research Forum
Full work available at URL: https://doi.org/10.1007/s43069-021-00059-y
Recommendations
- Randomized smoothing for stochastic optimization
- Stochastic optimization algorithm with variance reduction for solving non-smooth problems
- A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization
- A smoothing stochastic gradient method for composite optimization
- A proximal stochastic gradient method with progressive variance reduction
Keywords: variance reduction; non-smooth optimization; linear convergence; stochastic gradient descent; randomized smoothing
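The keywords name the two ingredients of the approach: Gaussian randomized smoothing of a non-smooth convex objective, and SVRG-style variance reduction of the stochastic (sub)gradients. As a purely illustrative sketch, and not the paper's own algorithm (the full work is at the DOI above), the snippet below combines the two on a least-absolute-deviations problem; the objective, step size, smoothing radius, and all variable names are assumptions made for this example.

```python
import numpy as np

# Illustrative sketch only: SVRG-style variance reduction applied to a
# randomly smoothed non-smooth objective f(x) = (1/n) * sum_i |a_i^T x - b_i|.
# Problem, step size, and smoothing radius are assumptions for the example,
# not taken from the paper.

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d)

def subgrad(x, idx):
    # Average subgradient of |a_i^T x - b_i| over the rows in idx.
    r = A[idx] @ x - b[idx]
    return A[idx].T @ np.sign(r) / len(idx)

x = np.zeros(d)
sigma, lr = 0.1, 0.05        # smoothing radius and step size (assumed)
for epoch in range(30):
    x_snap = x.copy()                      # snapshot for variance reduction
    full = subgrad(x_snap, np.arange(n))   # full subgradient at the snapshot
    for _ in range(n):
        i = rng.integers(n, size=1)
        z = rng.standard_normal(d)         # shared Gaussian smoothing direction
        # Evaluate the same sample at the same perturbed points so the two
        # noisy terms correlate; their difference plus the full subgradient
        # gives a low-variance estimate of the smoothed objective's gradient.
        g = subgrad(x + sigma * z, i) - subgrad(x_snap + sigma * z, i) + full
        x -= lr * g
    print(epoch, np.mean(np.abs(A @ x - b)))
```

Reusing the same perturbation z at both the current iterate and the snapshot is what makes the correction term cancel noise rather than add it; drawing independent perturbations for the two evaluations would forfeit the variance reduction.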
Cites Work
- Nonlinear total variation based noise removal algorithms
- Sparsity and Smoothness Via the Fused Lasso
- Smooth minimization of non-smooth functions
- Convergence rate of incremental subgradient algorithms
- An algorithm for total variation minimization and applications
- Linear convergence of epsilon-subgradient descent methods for a class of convex functions
- Dual averaging methods for regularized stochastic learning and online optimization
- Survey of Bundle Methods for Nonsmooth Optimization
- Generalization bounds for ranking algorithms via algorithmic stability
- Quasi-Newton Bundle-Type Methods for Nondifferentiable Convex Optimization
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Online Learning with Kernels
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- An approximate quasi-Newton bundle-type method for nonsmooth optimization
- Approximation analysis of gradient descent algorithm for bipartite ranking
- Randomized smoothing for stochastic optimization
- Minimizing finite sums with the stochastic average gradient
- Data-Driven Nonsmooth Optimization
- Stochastic Approximation for Risk-Aware Markov Decision Processes
- Fast proximal algorithms for nonsmooth convex optimization
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
Cited In (3)