Incremental proximal methods for large scale convex optimization
Publication: Q644913
Recommendations
- Incremental subgradient methods for nondifferentiable optimization
- An optimal randomized incremental gradient method
- Convergence rate of incremental subgradient algorithms
- A proximal stochastic gradient method with progressive variance reduction
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
Cites work
- scientific article; zbMATH DE number 1818892 (no title available)
- scientific article; zbMATH DE number 3538599 (no title available)
- scientific article; zbMATH DE number 1321699 (no title available)
- scientific article; zbMATH DE number 2121575 (no title available)
- A Collaborative Training Algorithm for Distributed Learning
- A Convergent Incremental Gradient Method with a Constant Step Size
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A Fast Multilevel Algorithm for Wavelet-Regularized Image Restoration
- A New Class of Incremental Gradient Methods for Least Squares Problems
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- An EM algorithm for wavelet-based image restoration
- An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule
- An iteration method in the problem of approximating functions from a finite number of observations
- An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- Convergence rate of incremental subgradient algorithms
- Convex Analysis
- Convex optimization theory
- Coordinate and subspace optimization methods for linear least squares with non-quadratic regularization
- Decomposition into functions in the minimization problem
- Distributed asynchronous incremental subgradient methods
- Distributed stochastic subgradient projection algorithms for convex optimization
- Ergodic convergence to a zero of the sum of monotone operators in Hilbert space
- Error stability properties of generalized gradient-type algorithms
- Extrapolation algorithm for affine-convex feasibility problems
- Gradient Convergence in Gradient Methods with Errors
- Gradient-based algorithms with applications to signal-recovery problems
- Incremental Least Squares Methods and the Extended Kalman Filter
- Incremental gradient algorithms with stepsizes bounded away from zero
- Incremental stochastic subgradient algorithms for convex optimization
- Incremental subgradient methods for nondifferentiable optimization
- Incremental subgradients for constrained convex optimization: A unified framework and new methods
- Monotone Operators and the Proximal Point Algorithm
- Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage
- Pegasos: primal estimated sub-gradient solver for SVM
- Projection algorithms: Results and open problems
- Relaxed Alternating Projection Methods
- Signal Recovery by Proximal Forward-Backward Splitting
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- The effect of deterministic noise in subgradient methods
- The method of projections for finding the common point of convex sets
- The ordered subsets mirror descent optimization method with applications to tomography
Cited in
(only the first 100 items are shown)
- Incremental subgradient methods for nondifferentiable optimization
- Discriminative Bayesian filtering lends momentum to the stochastic Newton method for minimizing log-convex functions
- Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
- Exponential convergence of distributed primal-dual convex optimization algorithm without strong convexity
- Convergence of the surrogate Lagrangian relaxation method
- A second order nonsmooth variational model for restoring manifold-valued images
- Sparse reduced-rank regression for multivariate varying-coefficient models
- Bregman Finito/MISO for nonconvex regularized finite sum minimization without Lipschitz gradient continuity
- Subgradient algorithms on Riemannian manifolds of lower bounded curvatures
- A framework for parallel second order incremental optimization algorithms for solving partially separable problems
- scientific article; zbMATH DE number 7079312 (no title available)
- Convergence analysis of incremental and parallel line search subgradient methods in Hilbert space
- Near-optimal stochastic approximation for online principal component estimation
- An asynchronous bundle-trust-region method for dual decomposition of stochastic mixed-integer programming
- Surpassing gradient descent provably: a cyclic incremental method with linear convergence rate
- An attention algorithm for solving large scale structured \(l_0\)-norm penalty estimation problems
- Path-based incremental target level algorithm on Riemannian manifolds
- Decentralized hierarchical constrained convex optimization
- Proximal Newton-type methods for minimizing composite functions
- Composite optimization with coupling constraints via dual proximal gradient method with applications to asynchronous networks
- Proximal-proximal-gradient method
- Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems
- The Averaged Kaczmarz Iteration for Solving Inverse Problems
- Communication-efficient algorithms for decentralized and stochastic optimization
- An incremental decomposition method for unconstrained optimization
- Subsampled inexact Newton methods for minimizing large sums of convex functions
- Forward-reflected-backward method with variance reduction
- Analysis of stochastic gradient descent in continuous time
- Estimation of a sparse and spiked covariance matrix
- Parametric and semiparametric reduced-rank regression with flexible sparsity
- First-order methods for convex optimization
- Incremental majorization-minimization optimization with application to large-scale machine learning
- Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
- Consensus-based distributed optimisation of multi-agent networks via a two level subgradient-proximal algorithm
- Rank reduction for high-dimensional generalized additive models
- A globally convergent incremental Newton method
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
- A coordinate-descent primal-dual algorithm with large step size and possibly nonseparable functions
- DC programming and DCA: thirty years of developments
- Sparse and Low-Rank Matrix Quantile Estimation With Application to Quadratic Regression
- Two proximal splitting methods in Hadamard spaces
- A second-order TV-type approach for inpainting and denoising higher dimensional combined cyclic and vector space data
- A scaled incremental gradient method
- Splitting proximal with penalization schemes for additive convex hierarchical minimization problems
- A smooth inexact penalty reformulation of convex problems with linear constraints
- An optimal randomized incremental gradient method
- Limited-angle CT reconstruction with generalized shrinkage operators as regularizers
- Dual averaging with adaptive random projection for solving evolving distributed optimization problems
- Dynamical behavior of a stochastic forward-backward algorithm using random monotone operators
- Distributed proximal-gradient method for convex optimization with inequality constraints
- AIR tools II: algebraic iterative reconstruction methods, improved implementation
- Convergence rate of incremental subgradient algorithms
- Bregman methods for large-scale optimization with applications in imaging
- Old and new challenges in Hadamard spaces
- Gradient-free method for nonsmooth distributed optimization
- Stochastic gradient methods for \(L^2\)-Wasserstein least squares problem of Gaussian measures
- Relative Optimality Conditions and Algorithms for Treespace Fréchet Means
- Incremental Constraint Projection Methods for Monotone Stochastic Variational Inequalities
- Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
- Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs
- Combined first and second order variational approaches for image processing
- Stochastic first-order methods with random constraint projection
- An inexact semismooth Newton method on Riemannian manifolds with application to duality-based total variation denoising
- Hybrid deterministic-stochastic methods for data fitting
- Incremental proximal gradient scheme with penalization for constrained composite convex optimization problems
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
- Random minibatch subgradient algorithms for convex problems with functional constraints
- Generalized row-action methods for tomographic imaging
- Discrete-time gradient flows and law of large numbers in Alexandrov spaces
- Total generalized variation for manifold-valued data
- Random algorithms for convex minimization problems
- Proximal gradient methods with adaptive subspace sampling
- On the convergence of stochastic primal-dual hybrid gradient
- On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems
- Two stochastic optimization algorithms for convex optimization with fixed point constraints
- Ergodic convergence of a stochastic proximal point algorithm
- Statistical performance of quantile tensor regression with convex regularization
- Product of resolvents on Hadamard manifolds
- Hierarchical MPC schemes for periodic systems using stochastic programming
- Sub-linear convergence of a stochastic proximal iteration method in Hilbert space
- Global convergence rate of proximal incremental aggregated gradient methods
- Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality
- Proximal-like incremental aggregated gradient method with linear convergence under Bregman distance growth conditions
- Wavelet Sparse Regularization for Manifold-Valued Data
- Incremental without replacement sampling in nonconvex optimization
- Inexact proximal stochastic gradient method for convex composite optimization
- Sublinear convergence of a tamed stochastic gradient descent method in Hilbert space
- Non-smooth variational regularization for processing manifold-valued data
- Inexact proximal \(\epsilon\)-subgradient methods for composite convex optimization problems
- A hybrid stochastic optimization framework for composite nonconvex optimization
- Decentralized proximal splitting algorithms for composite constrained convex optimization
- Modified Fejér sequences and applications
- Stochastic DCA for minimizing a large sum of DC functions with application to multi-class logistic regression
- Variance reduction techniques for stochastic proximal point algorithms
- Distributed optimization with information-constrained population dynamics
- Dual decomposition for multi-agent distributed optimization with coupling constraints
- Circuit analysis using monotone+skew splitting
- An asynchronous subgradient-proximal method for solving additive convex optimization problems
- A semismooth Newton stochastic proximal point algorithm with variance reduction