A stochastic variance reduced primal dual fixed point method for linearly constrained separable optimization
From MaRDI portal
Publication:5860367
Abstract: In this paper we combine the stochastic variance reduced gradient (SVRG) method [17] with the primal dual fixed point method (PDFP) proposed in [7] to minimize a sum of two convex functions, one of which is linearly composite. Problems of this type typically arise in sparse signal and image reconstruction. The proposed SVRG-PDFP can be seen as a generalization of Prox-SVRG [37], originally designed for the minimization of a sum of two convex functions. Based on some standard assumptions, we propose two variants: one for strongly convex objective functions and the other for general convex cases. Convergence analysis shows that the convergence rate of SVRG-PDFP is O(1/k) (here k is the iteration number) for general convex objective functions and linear for the strongly convex case. Numerical examples on machine learning and CT image reconstruction are provided to show the effectiveness of the algorithms.
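To make the variance-reduction idea concrete, the following is a minimal sketch of Prox-SVRG, the scheme the abstract says SVRG-PDFP generalizes (when the linear operator in the composite term is the identity). It is illustrative only, not the paper's exact SVRG-PDFP algorithm; the step size, epoch counts, and the lasso test problem are assumptions chosen for the example.

```python
import numpy as np

def prox_l1(x, t):
    # Soft-thresholding: proximal operator of t * ||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_svrg(A, b, lam, eta=0.02, epochs=30, seed=0):
    """Prox-SVRG sketch for min_x (1/n) sum_i 0.5*(a_i^T x - b_i)^2 + lam*||x||_1.

    Illustrative implementation of the variance-reduced proximal gradient
    method; all parameter choices here are assumptions, not the paper's.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x_tilde = np.zeros(d)  # snapshot point
    for _ in range(epochs):
        # Full gradient at the snapshot serves as the variance-reduction anchor.
        full_grad = A.T @ (A @ x_tilde - b) / n
        x = x_tilde.copy()
        for _ in range(2 * n):  # inner stochastic loop
            i = rng.integers(n)
            a_i = A[i]
            # Variance-reduced stochastic gradient estimate:
            # grad f_i(x) - grad f_i(x_tilde) + full_grad.
            v = a_i * (a_i @ x - b[i]) - a_i * (a_i @ x_tilde - b[i]) + full_grad
            # Proximal (soft-thresholding) step handles the nonsmooth l1 term.
            x = prox_l1(x - eta * v, eta * lam)
        x_tilde = x
    return x_tilde
```

On a small synthetic sparse-recovery instance, a few epochs of this iteration drive the regularized least-squares objective well below its value at zero. SVRG-PDFP replaces the plain proximal step with a PDFP-style primal-dual update so that a composite term g(Bx) with a general linear operator B can be handled.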
Recommendations
- Stochastic primal dual fixed point method for composite optimization
- An analysis of stochastic variance reduced gradient for linear inverse problems
- A proximal stochastic gradient method with progressive variance reduction
- A primal-dual fixed point algorithm for minimization of the sum of three convex separable functions
- Efficient primal-dual fixed point algorithms with dynamic stepsize for composite convex optimization problems
Cites work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science
- A generalized forward-backward splitting
- A primal–dual fixed point algorithm for convex separable minimization with applications to image restoration
- A proximal stochastic gradient method with progressive variance reduction
- Convergence of stochastic proximal gradient algorithm
- Convex analysis and monotone operator theory in Hilbert spaces
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- Efficient online and batch learning using forward backward splitting
- New Proximal Point Algorithms for Convex Minimization
- On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- On the \(O(1/n)\) convergence rate of the Douglas-Rachford alternating direction method
- On the ergodic convergence rates of a first-order primal-dual algorithm
- On the global and linear convergence of the generalized alternating direction method of multipliers
- Optimal primal-dual methods for a class of saddle point problems
- Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators
- Proximity algorithms for image models: denoising
- Signal Recovery by Proximal Forward-Backward Splitting
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
- Stochastic primal dual fixed point method for composite optimization
Cited in (5)
- Federated primal dual fixed point algorithm
- A stochastic primal-dual method for a class of nonconvex constrained optimization
- Stochastic primal dual fixed point method for composite optimization
- A stochastic variance reduction algorithm with Bregman distances for structured composite problems
- Improved SVRG for finite sum structure optimization with application to binary classification