Scientific article; zbMATH DE number 3850830
From MaRDI portal (Publication: 3320132)
Recommendations
- Rate of convergence of the method of feasible directions, not necessarily using the direction of steepest descent
- A descent method with the use of duality for the solution of a convex programming problem in a Hilbert space
- scientific article; zbMATH DE number 4057292
- scientific article; zbMATH DE number 3847229
- scientific article; zbMATH DE number 2102650
Cited in (only the first 100 items are shown)
- On efficiently solving the subproblems of a level-set method for fused lasso problems
- Inexact Newton regularization combined with two-point gradient methods for nonlinear ill-posed problems
- On the efficient computation of a generalized Jacobian of the projector over the Birkhoff polytope
- The approximate duality gap technique: a unified theory of first-order methods
- Is there an analog of Nesterov acceleration for gradient-based MCMC?
- A stochastic subspace approach to gradient-free optimization in high dimensions
- Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
- An accelerated first-order method with complexity analysis for solving cubic regularization subproblems
- Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences
- A Hadamard-stable extension of Courant's sequential method for convex extremal problems
- Variational image regularization with Euler's elastica using a discrete gradient scheme
- Efficient multiplicative noise removal method using isotropic second order total variation
- Complexity of gradient descent for multiobjective optimization
- GMRES-accelerated ADMM for quadratic objectives
- A second-order cone based approach for solving the trust-region subproblem and its variants
- Proximal Gradient Methods for Machine Learning and Imaging
- An extension of the second order dynamical system that models Nesterov's convex gradient method
- Nonconvex robust programming via value-function optimization
- An accelerated IRNN-iteratively reweighted nuclear norm algorithm for nonconvex nonsmooth low-rank minimization problems
- A second-order adaptive Douglas-Rachford dynamic method for maximal \(\alpha\)-monotone operators
- Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach
- Preconditioned accelerated gradient descent methods for locally Lipschitz smooth objectives with applications to the solution of nonlinear PDEs
- PDE acceleration: a convergence rate analysis and applications to obstacle problems
- Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions
- Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- An inertial extrapolation method for solving generalized split feasibility problems in real Hilbert spaces
- Gradient descent finds the cubic-regularized nonconvex Newton step
- A fast two-point gradient method for solving non-smooth nonlinear ill-posed problems
- Fast Proximal Methods via Time Scaling of Damped Inertial Dynamics
- Sharpness, restart, and acceleration
- Tikhonov regularization of a second order dynamical system with Hessian driven damping
- On the nonergodic convergence rate of an inexact augmented Lagrangian framework for composite convex programming
- Some accelerated alternating proximal gradient algorithms for a class of nonconvex nonsmooth problems
- Multicomposite nonconvex optimization for training deep neural networks
- Convergence rates for an inertial algorithm of gradient type associated to a smooth non-convex minimization
- Potential reduction method for harmonically convex programming
- Accelerated gradient-free optimization methods with a non-Euclidean proximal operator
- Sublinear-time quadratic minimization via spectral decomposition of matrices
- An accelerated method for nonlinear elliptic PDE
- Robust accelerated gradient methods for smooth strongly convex functions
- Optimal convergence rates for Nesterov acceleration
- An inertial Mann forward-backward splitting algorithm of variational inclusion problems and its applications
- Clustering of fuzzy data and simultaneous feature selection: a model selection approach
- Sparse adaptive parameterization of variability in image ensembles
- A wavelet frame approach for removal of mixed Gaussian and impulse noise on surfaces
- Learning partial differential equations via data discovery and sparse optimization
- Viscosity \(S\)-iteration method with inertial technique and self-adaptive step size for split variational inclusion, equilibrium and fixed point problems
- Analysis of optimization algorithms via integral quadratic constraints: nonstrongly convex problems
- Contrast invariant SNR and isotonic regressions
- Primal-dual accelerated gradient methods with small-dimensional relaxation oracle
- A nonmonotone gradient algorithm for total variation image denoising problems
- Convergence rates of a dual gradient method for constrained linear ill-posed problems
- Asymptotic equivalence of evolution equations governed by cocoercive operators and their forward discretizations
- A dimension reduction technique for large-scale structured sparse optimization problems with application to convex clustering
- Lower bounds for finding stationary points I
- scientific article; zbMATH DE number 7306860
- Directional total generalized variation regularization
- Nesterov’s accelerated gradient method for nonlinear ill-posed problems with a locally convex residual functional
- Alternating minimization methods for strongly convex optimization
- On the convergence of the iterates of proximal gradient algorithm with extrapolation for convex nonsmooth minimization problems
- Two optimization approaches for solving split variational inclusion problems with applications
- Provable accelerated gradient method for nonconvex low rank optimization
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- Solving the OSCAR and SLOPE models using a semismooth Newton-based augmented Lagrangian method
- A finite element/operator-splitting method for the numerical solution of the two dimensional elliptic Monge-Ampère equation
- Constructing unbiased gradient estimators with finite variance for conditional stochastic optimization
- Finite convergence of proximal-gradient inertial algorithms combining dry friction with Hessian-driven damping
- Variational inequality over the set of common solutions of a system of bilevel variational inequality problem with applications
- Stochastic generalized gradient methods for training nonconvex nonsmooth neural networks
- Determining a time-dependent coefficient in a time-fractional diffusion-wave equation with the Caputo derivative by an additional integral condition
- An accelerated smoothing gradient method for nonconvex nonsmooth minimization in image processing
- Differentially private distributed logistic regression with the objective function perturbation
- Research on three-step accelerated gradient algorithm in deep learning
- Relaxed inertial methods for solving the split monotone variational inclusion problem beyond co-coerciveness
- An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
- Analysis of a heuristic rule for the IRGNM in Banach spaces with convex regularization terms
- Faster response in bounded-update-rate, discrete-time linear networks using delayed self-reinforcement
- Unified acceleration of high-order algorithms under general Hölder continuity
- Numerical computations of split Bregman method for fourth order total variation flow
- On the strong convergence of the trajectories of a Tikhonov regularized second order dynamical system with asymptotically vanishing damping
- On the strong convergence of continuous Newton-like inertial dynamics with Tikhonov regularization for monotone inclusions
- Fast augmented Lagrangian method in the convex regime with convergence guarantees for the iterates
- Convergence rate of inertial proximal algorithms with general extrapolation and proximal coefficients
- An ordinary differential equation for modeling Halpern fixed-point Algorithm
- A proximal regularized Gauss-Newton-Kaczmarz method and its acceleration for nonlinear ill-posed problems
- Regularized nonlinear acceleration
- An accelerated forward-backward algorithm with a new linesearch for convex minimization problems and its applications
- Improving "fast iterative shrinkage-thresholding algorithm": faster, smarter, and greedier
- An accelerated differential equation system for generalized equations
- Self adaptive inertial relaxed \(CQ\) algorithms for solving split feasibility problem with multiple output sets
- Convergence rates of first- and higher-order dynamics for solving linear ill-posed problems
- SRKCD: a stabilized Runge-Kutta method for stochastic optimization
- scientific article; zbMATH DE number 7370590
- Bregman Itoh-Abe methods for sparse optimisation
- Error bound of critical points and KL property of exponent 1/2 for squared F-norm regularized factorization
- Inertial iterative algorithms for common solution of variational inequality and system of variational inequalities problems
- Accelerated information gradient flow
- Convergence of relaxed inertial subgradient extragradient methods for quasimonotone variational inequality problems
- EGC: entropy-based gradient compression for distributed deep learning