scientific article; zbMATH DE number 3850830
Publication:3320132
Recommendations
- Rate of convergence of the method of feasible directions, not necessarily using the direction of steepest descent
- A descent method with the use of duality for the solution of a convex programming problem in a Hilbert space
- scientific article; zbMATH DE number 4057292
- scientific article; zbMATH DE number 3847229
- scientific article; zbMATH DE number 2102650
Cited in (only showing first 100 items)
- Fast convex optimization via inertial dynamics combining viscous and Hessian-driven damping with time rescaling
- Linear convergence of proximal gradient algorithm with extrapolation for a class of nonconvex nonsmooth minimization problems
- Convergence of the augmented decomposition algorithm
- A multiplicative weights update algorithm for packing and covering semi-infinite linear programs
- Limited-memory common-directions method for large-scale optimization: convergence, parallelization, and distributed optimization
- Large-scale eigenvector approximation via Hilbert space embedding Nyström
- First-order optimization algorithms via inertial systems with Hessian driven damping
- Convergence rate analysis of proximal gradient methods with applications to composite minimization problems
- Stochastic heavy ball
- Stochastic accelerated alternating direction method of multipliers with importance sampling
- A generic online acceleration scheme for optimization algorithms via relaxation and inertia
- A highly efficient semismooth Newton augmented Lagrangian method for solving lasso problems
- On the convergence of a class of inertial dynamical systems with Tikhonov regularization
- Convergence rates of inertial forward-backward algorithms
- Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity
- A linear-time algorithm for the trust region subproblem based on hidden convexity
- The Differential Inclusion Modeling FISTA Algorithm and Optimality of Convergence Rate in the Case $b \leq 3$
- Accelerated stochastic variance reduction for a class of convex optimization problems
- Alternating direction based method for optimal control problem constrained by Stokes equation
- Accelerated differential inclusion for convex optimization
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
- Damped inertial dynamics with vanishing Tikhonov regularization: strong asymptotic convergence towards the minimum norm solution
- Convergence of iterates for first-order optimization algorithms with inertia and Hessian driven damping
- MAGMA: multilevel accelerated gradient mirror descent algorithm for large-scale convex composite minimization
- Local and global convergence of a general inertial proximal splitting scheme for minimizing composite functions
- Inexact proximal \(\epsilon\)-subgradient methods for composite convex optimization problems
- Convergence rates of the heavy ball method for quasi-strongly convex optimization
- A projected extrapolated gradient method with larger step size for monotone variational inequalities
- A conjugate subgradient algorithm with adaptive preconditioning for the least absolute shrinkage and selection operator minimization
- Quadratic regularization projected Barzilai-Borwein method for nonnegative matrix factorization
- Mixed higher order variational model for image recovery
- Equivalent Lipschitz surrogates for zero-norm and rank optimization problems
- Convergence rate of inertial forward-backward algorithm beyond Nesterov's rule
- Accelerated parallel and distributed algorithm using limited internal memory for nonnegative matrix factorization
- Convergence of damped inertial dynamics governed by regularized maximally monotone operators
- An adaptive three-term conjugate gradient method based on self-scaling memoryless BFGS matrix
- A cyclic projected gradient method
- Rate of convergence of inertial gradient dynamics with time-dependent viscous damping coefficient
- A fast image recovery algorithm based on splitting deblurring and denoising
- Accelerated proximal gradient method for elastoplastic analysis with von Mises yield criterion
- Accelerated alternating direction method of multipliers: an optimal \(O(1 / K)\) nonergodic analysis
- Linear convergence rates for variants of the alternating direction method of multipliers in smooth cases
- A proximal difference-of-convex algorithm with extrapolation
- Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates
- A duality based approach to the minimizing total variation flow in the space \(H^{-s}\)
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- A fast proximal iteratively reweighted nuclear norm algorithm for nonconvex low-rank matrix minimization problems
- Douglas-Rachford splitting and ADMM for nonconvex optimization: accelerated and Newton-type linesearch algorithms
- Iterative algorithms for total variation-like reconstructions in seismic tomography
- A new fast algorithm for constrained four-directional total variation image denoising problem
- Optimizing cluster structures with inner product induced norm based dissimilarity measures: theoretical development and convergence analysis
- On the effect of perturbations in first-order optimization methods with inertia and Hessian driven damping
- scientific article; zbMATH DE number 7306890
- Accelerated primal-dual proximal block coordinate updating methods for constrained convex optimization
- On the reconstruction of media inhomogeneity by inverse wave scattering model
- The Shannon total variation
- Multiple change points detection in high-dimensional multivariate regression
- A stochastic gradient algorithm with momentum terms for optimal control problems governed by a convection-diffusion equation with random diffusivity
- Second-order stochastic optimization for machine learning in linear time
- Finding the nearest positive-real system
- Splitting and linearizing augmented Lagrangian algorithm for subspace recovery from corrupted observations
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- On efficiently solving the subproblems of a level-set method for fused lasso problems
- Inexact Newton regularization combined with two-point gradient methods for nonlinear ill-posed problems
- On the efficient computation of a generalized Jacobian of the projector over the Birkhoff polytope
- The approximate duality gap technique: a unified theory of first-order methods
- Is there an analog of Nesterov acceleration for gradient-based MCMC?
- A stochastic subspace approach to gradient-free optimization in high dimensions
- Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
- An accelerated first-order method with complexity analysis for solving cubic regularization subproblems
- Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences
- A Hadamard-stable extension of Courant's sequential method for convex extremal problems
- Variational image regularization with Euler's elastica using a discrete gradient scheme
- Efficient multiplicative noise removal method using isotropic second order total variation
- Complexity of gradient descent for multiobjective optimization
- GMRES-accelerated ADMM for quadratic objectives
- A second-order cone based approach for solving the trust-region subproblem and its variants
- Proximal Gradient Methods for Machine Learning and Imaging
- An extension of the second order dynamical system that models Nesterov's convex gradient method
- Nonconvex robust programming via value-function optimization
- An accelerated IRNN-iteratively reweighted nuclear norm algorithm for nonconvex nonsmooth low-rank minimization problems
- A second-order adaptive Douglas-Rachford dynamic method for maximal \(\alpha\)-monotone operators
- Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach
- Preconditioned accelerated gradient descent methods for locally Lipschitz smooth objectives with applications to the solution of nonlinear PDEs
- PDE acceleration: a convergence rate analysis and applications to obstacle problems
- Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions
- Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- An inertial extrapolation method for solving generalized split feasibility problems in real Hilbert spaces
- Gradient descent finds the cubic-regularized nonconvex Newton step
- A fast two-point gradient method for solving non-smooth nonlinear ill-posed problems
- Fast Proximal Methods via Time Scaling of Damped Inertial Dynamics
- Sharpness, restart, and acceleration
- Tikhonov regularization of a second order dynamical system with Hessian driven damping
- On the nonergodic convergence rate of an inexact augmented Lagrangian framework for composite convex programming
- Some accelerated alternating proximal gradient algorithms for a class of nonconvex nonsmooth problems
- Multicomposite nonconvex optimization for training deep neural networks
- Convergence rates for an inertial algorithm of gradient type associated to a smooth non-convex minimization
- Potential reduction method for harmonically convex programming
- Accelerated gradient-free optimization methods with a non-Euclidean proximal operator