Proximal average approximated incremental gradient descent for composite penalty regularized empirical risk minimization
DOI: 10.1007/s10994-016-5609-1
zbMATH Open: 1459.62156
OpenAlex: W2550590730
MaRDI QID: Q2398094
FDO: Q2398094
Authors: Yiu-ming Cheung, Jian Lou
Publication date: 15 August 2017
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-016-5609-1
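The title refers to the proximal-average technique (see the cited work "The Proximal Average: Basic Theory"), whose key computational property is that the proximal operator of a proximal average of penalties can be evaluated as the weighted average of the individual proximal operators. The following is a minimal illustrative sketch of an incremental proximal-gradient step built on that identity, not the authors' algorithm; the single-sample squared loss, the two penalties (ℓ1 and squared ℓ2), and all function names are assumptions for illustration.

```python
import numpy as np

def prox_l1(v, t):
    # Soft-thresholding: proximal operator of t * ||.||_1  (an illustrative penalty choice).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_l2sq(v, t):
    # Proximal operator of (t / 2) * ||.||_2^2.
    return v / (1.0 + t)

def prox_average(v, t, proxes, weights):
    # Proximal-average identity: the prox of the proximal average of several
    # penalties is the weighted average of their individual proxes.
    return sum(w * p(v, t) for w, p in zip(weights, proxes))

def pa_ig_step(w, a_i, b_i, eta, weights):
    # One incremental (single-sample) proximal-gradient step:
    # gradient of the squared loss 0.5 * (a_i^T w - b_i)^2, then a prox step
    # through the proximal average of the composite penalty.
    grad = (a_i @ w - b_i) * a_i
    return prox_average(w - eta * grad, eta, [prox_l1, prox_l2sq], weights)
```

Because each constituent prox here is closed-form, the proximal-average step costs no more than the cheapest individual prox evaluations, which is what makes the approximation attractive for composite penalties without a tractable joint prox.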
Recommendations
- Accelerated dual-averaging primal–dual method for composite convex minimization
- Distributed block-diagonal approximation methods for regularized empirical risk minimization
- Adaptive linearized alternating direction method of multipliers for non-convex compositely regularized optimization problems
- Incremental proximal gradient scheme with penalization for constrained composite convex optimization problems
- An adaptive superfast inexact proximal augmented Lagrangian method for smooth nonconvex composite optimization problems
Classification (MSC)
- Learning and adaptive systems in artificial intelligence (68T05)
- Stochastic approximation (62L20)
- Methods of reduced gradient type (90C52)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Nearly unbiased variable selection under minimax concave penalty
- Distributed optimization and statistical learning via the alternating direction method of multipliers
- Analysis of multi-stage convex relaxation for sparse regularization
- Introductory lectures on convex optimization. A basic course.
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- Grouping Pursuit Through a Regularization Solution Surface
- Dual averaging methods for regularized stochastic learning and online optimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Semi-stochastic coordinate descent
- The Proximal Average: Basic Theory
- Large-Scale Machine Learning with Stochastic Gradient Descent
Cited In (5)
- Selective linearization for multi-block statistical learning
- Fixed point quasiconvex subgradient method
- Inexact stochastic subgradient projection method for stochastic equilibrium problems with nonmonotone bifunctions: application to expected risk minimization in machine learning
- Integral resolvent and proximal mixtures
- An Efficient Algorithm for Minimizing Multi Non-Smooth Component Functions