Scientific article; zbMATH DE number 3850830
zbMATH Open: Zbl 0535.90071; MaRDI QID: Q3320132
Author: Yuri Nesterov
Publication date: 1983
Title of this publication is not available.
Recommendations
- Rate of convergence of the method of feasible directions, not necessarily using the direction of steepest descent
- A descent method with the use of duality for the solution of a convex programming problem in a Hilbert space
- scientific article; zbMATH DE number 4057292
- scientific article; zbMATH DE number 3847229
- scientific article; zbMATH DE number 2102650
Keywords: estimation; convergence rate; constrained minimization; global Lipschitz condition; differentiable objective function; convex programming in Hilbert space
MSC classifications: Numerical mathematical programming methods (65K05); Convex programming (90C25); Methods of successive quadratic programming type (90C55); Programming in abstract spaces (90C48); Inner product spaces and their generalizations, Hilbert spaces (46C99)
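This entry corresponds to Nesterov's 1983 publication on an accelerated first-order method for convex minimization, as the cited-in list below suggests. A minimal sketch of the classic accelerated gradient scheme with the momentum coefficients \(t_{k+1} = (1 + \sqrt{1 + 4t_k^2})/2\) follows; the function names and the quadratic test problem are illustrative, not from the source:

```python
import numpy as np

def accelerated_gradient(grad, x0, L, n_iters=100):
    """Accelerated gradient method for a convex objective with
    L-Lipschitz gradient; attains an O(1/k^2) objective-gap rate.

    grad    : callable returning the gradient of the objective
    x0      : starting point
    L       : Lipschitz constant of the gradient (step size is 1/L)
    n_iters : number of iterations
    """
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(n_iters):
        # gradient step from the extrapolated point y
        x_next = y - grad(y) / L
        # momentum coefficient update
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        # extrapolation (momentum) step
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# Illustrative use: minimize f(x) = 0.5 * ||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
grad = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A.T @ A, 2)  # spectral norm bounds the Lipschitz constant
x_star = accelerated_gradient(grad, np.zeros(2), L, n_iters=200)
```

On this small quadratic the iterates approach the least-squares solution `np.linalg.solve(A, b)`; the scheme itself only assumes convexity and gradient Lipschitz continuity, matching the keywords of this entry.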
Cited in (only the first 100 items are shown):
- Numerical computations of split Bregman method for fourth order total variation flow
- Nesterov-aided stochastic gradient methods using Laplace approximation for Bayesian design optimization
- A new class of accelerated regularization methods, with application to bioluminescence tomography
- On dissipative symplectic integration with applications to gradient-based optimization
- Analysis of a heuristic rule for the IRGNM in Banach spaces with convex regularization terms
- Ensemble Kalman inversion: a derivative-free technique for machine learning tasks
- A new Kaczmarz-type method and its acceleration for nonlinear ill-posed problems
- Adaptive FISTA for Nonconvex Optimization
- Convergence rate of a relaxed inertial proximal algorithm for convex minimization
- Quantum entropic regularization of matrix-valued optimal transport
- Two New Inertial Algorithms for Solving Variational Inequalities in Reflexive Banach Spaces
- Relaxed inertial Tseng extragradient method for variational inequality and fixed point problems
- Error bound of critical points and KL property of exponent 1/2 for squared F-norm regularized factorization
- Inertial iterative algorithms for common solution of variational inequality and system of variational inequalities problems
- Accelerated information gradient flow
- Convergence of relaxed inertial subgradient extragradient methods for quasimonotone variational inequality problems
- EGC: entropy-based gradient compression for distributed deep learning
- \(\mathrm{B}\)-subdifferentials of the projection onto the matrix simplex
- Variational inequality over the set of common solutions of a system of bilevel variational inequality problem with applications
- Stochastic generalized gradient methods for training nonconvex nonsmooth neural networks
- Determining a time-dependent coefficient in a time-fractional diffusion-wave equation with the Caputo derivative by an additional integral condition
- An accelerated smoothing gradient method for nonconvex nonsmooth minimization in image processing
- An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
- Iteration complexity of generalized complementarity problems
- Learning context-dependent choice functions
- A proximal point like method for solving tensor least-squares problems
- A piecewise conservative method for unconstrained convex optimization
- Understanding the acceleration phenomenon via high-resolution differential equations
- A proximal regularized Gauss-Newton-Kaczmarz method and its acceleration for nonlinear ill-posed problems
- Asymptotic for a second order evolution equation with damping and regularizing terms
- Iterative pre-conditioning for expediting the distributed gradient-descent method: the case of linear least-squares problem
- An accelerated viscosity forward-backward splitting algorithm with the linesearch process for convex minimization problems
- Two-stage geometric information guided image reconstruction
- Weak and strong convergence of inertial algorithms for solving split common fixed point problems
- The inertial relaxed algorithm with Armijo-type line search for solving multiple-sets split feasibility problem
- Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
- An accelerated forward-backward algorithm with a new linesearch for convex minimization problems and its applications
- Scheduled restart momentum for accelerated stochastic gradient descent
- Alternating forward-backward splitting for linearly constrained optimization problems
- An algorithm for the split feasible problem and image restoration
- How does momentum benefit deep neural networks architecture design? A few case studies
- The common-directions method for regularized empirical risk minimization
- Computing ground states of Bose-Einstein condensates with higher order interaction via a regularized density function formulation
- Finite convergence of proximal-gradient inertial algorithms combining dry friction with Hessian-driven damping
- Unified acceleration of high-order algorithms under general Hölder continuity
- Improving "fast iterative shrinkage-thresholding algorithm": faster, smarter, and greedier
- Regularized nonlinear acceleration
- An accelerated differential equation system for generalized equations
- Self adaptive inertial relaxed \(CQ\) algorithms for solving split feasibility problem with multiple output sets
- Convergence rates of first- and higher-order dynamics for solving linear ill-posed problems
- SRKCD: a stabilized Runge-Kutta method for stochastic optimization
- An inertial based forward-backward algorithm for monotone inclusion problems and split mixed equilibrium problems in Hilbert spaces
- An inertial Bregman generalized alternating direction method of multipliers for nonconvex optimization
- Adaptive Hamiltonian variational integrators and applications to symplectic accelerated optimization
- Unbiased MLMC stochastic gradient-based optimization of Bayesian experimental designs
- An inertially constructed forward-backward splitting algorithm in Hilbert spaces
- Convergence rate of inertial proximal algorithms with general extrapolation and proximal coefficients
- Bregman Itoh-Abe methods for sparse optimisation
- Constructing unbiased gradient estimators with finite variance for conditional stochastic optimization
- Complexity of gradient descent for multiobjective optimization
- Preconditioned accelerated gradient descent methods for locally Lipschitz smooth objectives with applications to the solution of nonlinear PDEs
- Fast Proximal Methods via Time Scaling of Damped Inertial Dynamics
- An accelerated method for nonlinear elliptic PDE
- A finite element/operator-splitting method for the numerical solution of the two dimensional elliptic Monge-Ampère equation
- PDE acceleration: a convergence rate analysis and applications to obstacle problems
- Efficient multiplicative noise removal method using isotropic second order total variation
- On efficiently solving the subproblems of a level-set method for fused lasso problems
- Proximal Gradient Methods for Machine Learning and Imaging
- Some accelerated alternating proximal gradient algorithms for a class of nonconvex nonsmooth problems
- Directional total generalized variation regularization
- On the convergence of the iterates of proximal gradient algorithm with extrapolation for convex nonsmooth minimization problems
- A wavelet frame approach for removal of mixed Gaussian and impulse noise on surfaces
- Nesterov’s accelerated gradient method for nonlinear ill-posed problems with a locally convex residual functional
- Potential reduction method for harmonically convex programming
- Optimal convergence rates for Nesterov acceleration
- Two optimization approaches for solving split variational inclusion problems with applications
- Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions
- Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- A second-order cone based approach for solving the trust-region subproblem and its variants
- Gradient descent finds the cubic-regularized nonconvex Newton step
- Sharpness, restart, and acceleration
- On the nonergodic convergence rate of an inexact augmented Lagrangian framework for composite convex programming
- Multicomposite nonconvex optimization for training deep neural networks
- Robust accelerated gradient methods for smooth strongly convex functions
- Primal-dual accelerated gradient methods with small-dimensional relaxation oracle
- A dimension reduction technique for large-scale structured sparse optimization problems with application to convex clustering
- An extension of the second order dynamical system that models Nesterov's convex gradient method
- Nonconvex robust programming via value-function optimization
- An accelerated IRNN-iteratively reweighted nuclear norm algorithm for nonconvex nonsmooth low-rank minimization problems
- A second-order adaptive Douglas-Rachford dynamic method for maximal \(\alpha\)-monotone operators
- Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach
- An inertial extrapolation method for solving generalized split feasibility problems in real Hilbert spaces
- Provable accelerated gradient method for nonconvex low rank optimization
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- GMRES-accelerated ADMM for quadratic objectives
- Contrast invariant SNR and isotonic regressions
- Lower bounds for finding stationary points I