Incremental proximal methods for large scale convex optimization
DOI: 10.1007/s10107-011-0472-0 · zbMATH Open: 1229.90121 · OpenAlex: W2073750241 · MaRDI QID: Q644913
Authors: Dimitri P. Bertsekas
Publication date: 7 November 2011
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-011-0472-0
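The paper indexed here studies incremental proximal methods, which minimize a sum \(\sum_i f_i(x)\) by applying a proximal step to one component at a time: \(x_{k+1} = \arg\min_x \{ f_{i_k}(x) + \frac{1}{2\alpha_k}\|x - x_k\|^2 \}\). A minimal sketch of this iteration, assuming least-squares components \(f_i(x) = \tfrac12 (a_i^\top x - b_i)^2\) so the proximal subproblem has a closed form (the function names, cyclic component order, and constant stepsize below are illustrative choices, not taken from the paper):

```python
import numpy as np

def prox_ls_component(z, a, b, alpha):
    # Closed-form proximal step for f(x) = 0.5*(a@x - b)**2:
    # argmin_x 0.5*(a@x - b)**2 + (1/(2*alpha))*||x - z||**2
    # reduces to a relaxed projection onto the hyperplane a@x = b.
    r = (a @ z - b) / (1.0 + alpha * (a @ a))
    return z - alpha * r * a

def incremental_proximal(A, b, alpha=1.0, epochs=300):
    # Cycle through the m components, one proximal step each.
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(epochs):
        for i in range(m):
            x = prox_ls_component(x, A[i], b[i], alpha)
    return x
```

For a consistent system these relaxed projections converge to a solution; for inconsistent or general convex components the paper analyzes stepsize conditions under which the iterates approach a minimizer.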
Recommendations
- Incremental subgradient methods for nondifferentiable optimization
- An optimal randomized incremental gradient method
- Convergence rate of incremental subgradient algorithms
- A proximal stochastic gradient method with progressive variance reduction
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
Mathematics Subject Classification
- Convex programming (90C25)
- Large-scale problems in mathematical programming (90C06)
- Applications of mathematical programming (90C90)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Pegasos: primal estimated sub-gradient solver for SVM
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- Convex optimization theory.
- Convergence rate of incremental subgradient algorithms
- The ordered subsets mirror descent optimization method with applications to tomography
- Convex Analysis
- Coordinate and subspace optimization methods for linear least squares with non-quadratic regularization
- Signal Recovery by Proximal Forward-Backward Splitting
- Title not available
- An EM algorithm for wavelet-based image restoration
- Monotone Operators and the Proximal Point Algorithm
- Title not available
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- Ergodic convergence to a zero of the sum of monotone operators in Hilbert space
- Incremental gradient algorithms with stepsizes bounded away from zero
- Title not available
- An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule
- Gradient Convergence in Gradient Methods with Errors
- A Convergent Incremental Gradient Method with a Constant Step Size
- The method of projections for finding the common point of convex sets
- Title not available
- Gradient-based algorithms with applications to signal-recovery problems
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage
- Decomposition into functions in the minimization problem
- Incremental subgradient methods for nondifferentiable optimization
- Distributed stochastic subgradient projection algorithms for convex optimization
- Incremental Least Squares Methods and the Extended Kalman Filter
- The effect of deterministic noise in subgradient methods
- A New Class of Incremental Gradient Methods for Least Squares Problems
- Error stability properties of generalized gradient-type algorithms
- Extrapolation algorithm for affine-convex feasibility problems
- Relaxed Alternating Projection Methods
- Incremental subgradients for constrained convex optimization: A unified framework and new methods
- A Collaborative Training Algorithm for Distributed Learning
- Distributed asynchronous incremental subgradient methods
- Incremental stochastic subgradient algorithms for convex optimization
- Projection algorithms: Results and open problems
- An iteration method in the problem of approximating functions from a finite number of observations
- A Fast Multilevel Algorithm for Wavelet-Regularized Image Restoration
Cited In (showing first 100 items)
- Composite optimization with coupling constraints via dual proximal gradient method with applications to asynchronous networks
- Decentralized hierarchical constrained convex optimization
- Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems
- Forward-reflected-backward method with variance reduction
- Consensus-based distributed optimisation of multi-agent networks via a two level subgradient-proximal algorithm
- Sparse and Low-Rank Matrix Quantile Estimation With Application to Quadratic Regression
- Two proximal splitting methods in Hadamard spaces
- A scaled incremental gradient method
- Splitting proximal with penalization schemes for additive convex hierarchical minimization problems
- Bregman methods for large-scale optimization with applications in imaging
- Old and new challenges in Hadamard spaces
- An inexact semismooth Newton method on Riemannian manifolds with application to duality-based total variation denoising
- On the convergence of stochastic primal-dual hybrid gradient
- Two stochastic optimization algorithms for convex optimization with fixed point constraints
- Statistical performance of quantile tensor regression with convex regularization
- Product of resolvents on Hadamard manifolds
- Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality
- Sub-linear convergence of a stochastic proximal iteration method in Hilbert space
- Wavelet Sparse Regularization for Manifold-Valued Data
- Variance reduction techniques for stochastic proximal point algorithms
- Decentralized proximal splitting algorithms for composite constrained convex optimization
- Circuit analysis using monotone+skew splitting
- An asynchronous subgradient-proximal method for solving additive convex optimization problems
- A semismooth Newton stochastic proximal point algorithm with variance reduction
- Efficient algorithms for implementing incremental proximal-point methods
- The stochastic proximal distance algorithm
- Primal-dual algorithms for multi-agent structured optimization over message-passing architectures with bounded communication delays
- Accelerated proximal incremental algorithm schemes for non-strongly convex functions
- Incremental gradient-free method for nonsmooth distributed optimization
- Linear convergence of cyclic SAGA
- Discriminative Bayesian filtering lends momentum to the stochastic Newton method for minimizing log-convex functions
- Exponential convergence of distributed primal-dual convex optimization algorithm without strong convexity
- Bregman Finito/MISO for nonconvex regularized finite sum minimization without Lipschitz gradient continuity
- Subgradient algorithms on Riemannian manifolds of lower bounded curvatures
- Convergence analysis of incremental and parallel line search subgradient methods in Hilbert space
- An asynchronous bundle-trust-region method for dual decomposition of stochastic mixed-integer programming
- Path-based incremental target level algorithm on Riemannian manifolds
- An attention algorithm for solving large scale structured \(l_0\)-norm penalty estimation problems
- Proximal-proximal-gradient method
- Proximal Newton-type methods for minimizing composite functions
- The Averaged Kaczmarz Iteration for Solving Inverse Problems
- Communication-efficient algorithms for decentralized and stochastic optimization
- Subsampled inexact Newton methods for minimizing large sums of convex functions
- An incremental decomposition method for unconstrained optimization
- Estimation of a sparse and spiked covariance matrix
- First-order methods for convex optimization
- Analysis of stochastic gradient descent in continuous time
- Incremental majorization-minimization optimization with application to large-scale machine learning
- Parametric and semiparametric reduced-rank regression with flexible sparsity
- Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
- A coordinate-descent primal-dual algorithm with large step size and possibly nonseparable functions
- Rank reduction for high-dimensional generalized additive models
- A globally convergent incremental Newton method
- DC programming and DCA: thirty years of developments
- A smooth inexact penalty reformulation of convex problems with linear constraints
- A second-order TV-type approach for inpainting and denoising higher dimensional combined cyclic and vector space data
- An optimal randomized incremental gradient method
- Distributed proximal-gradient method for convex optimization with inequality constraints
- Limited-angle CT reconstruction with generalized shrinkage operators as regularizers
- Dynamical behavior of a stochastic forward-backward algorithm using random monotone operators
- Dual averaging with adaptive random projection for solving evolving distributed optimization problems
- AIR tools II: algebraic iterative reconstruction methods, improved implementation
- Convergence rate of incremental subgradient algorithms
- Stochastic gradient methods for \(L^2\)-Wasserstein least squares problem of Gaussian measures
- Relative Optimality Conditions and Algorithms for Treespace Fréchet Means
- Incremental Constraint Projection Methods for Monotone Stochastic Variational Inequalities
- Gradient-free method for nonsmooth distributed optimization
- Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
- Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Combined first and second order variational approaches for image processing
- Incremental proximal gradient scheme with penalization for constrained composite convex optimization problems
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
- Stochastic first-order methods with random constraint projection
- Hybrid deterministic-stochastic methods for data fitting
- Total generalized variation for manifold-valued data
- Random minibatch subgradient algorithms for convex problems with functional constraints
- Proximal gradient methods with adaptive subspace sampling
- Generalized row-action methods for tomographic imaging
- Discrete-time gradient flows and law of large numbers in Alexandrov spaces
- Random algorithms for convex minimization problems
- On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems
- Ergodic convergence of a stochastic proximal point algorithm
- Global convergence rate of proximal incremental aggregated gradient methods
- Hierarchical MPC schemes for periodic systems using stochastic programming
- Proximal-like incremental aggregated gradient method with linear convergence under Bregman distance growth conditions
- Incremental without replacement sampling in nonconvex optimization
- Inexact proximal stochastic gradient method for convex composite optimization
- Sublinear convergence of a tamed stochastic gradient descent method in Hilbert space
- Non-smooth variational regularization for processing manifold-valued data
- Inexact proximal \(\epsilon\)-subgradient methods for composite convex optimization problems
- A hybrid stochastic optimization framework for composite nonconvex optimization
- Stochastic DCA for minimizing a large sum of DC functions with application to multi-class logistic regression
- Modified Fejér sequences and applications
- Distributed optimization with information-constrained population dynamics
- Dual decomposition for multi-agent distributed optimization with coupling constraints
- Manifold-valued data in medical imaging applications
- Nonlinear functional canonical correlation analysis via distance covariance
- Forward-Backward-Half Forward Algorithm for Solving Monotone Inclusions