Pages that link to "Item:Q2355332"
From MaRDI portal
The following pages link to Adaptive restart for accelerated gradient schemes (Q2355332):
Displaying 50 items.
- Proximal Methods for Sparse Optimal Scoring and Discriminant Analysis (Q97534)
- Conic optimization via operator splitting and homogeneous self-dual embedding (Q301735)
- Optimized first-order methods for smooth convex minimization (Q312663)
- Parallel Nesterov's method for large-scale minimization of partially separable functions (Q519774)
- New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure (Q1659678)
- Adaptive restart of the optimized gradient method for convex optimization (Q1670019)
- A globally convergent algorithm for nonconvex optimization based on block coordinate update (Q1676921)
- An adaptive three-term conjugate gradient method based on self-scaling memoryless BFGS matrix (Q1677473)
- Dual approaches to the minimization of strongly convex functionals with a simple structure under affine constraints (Q1683173)
- Acceleration of the PDHGM on partially strongly convex functions (Q1703999)
- How to choose biomarkers in view of parameter estimation (Q1711958)
- Convergence of first-order methods via the convex conjugate (Q1728354)
- Proximal alternating penalty algorithms for nonsmooth constrained convex optimization (Q1734766)
- A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo-Tseng error bound property (Q1739040)
- Acceleration of the imaginary time method for spectrally computing the stationary states of Gross-Pitaevskii equations (Q1739125)
- Accelerated proximal gradient method for elastoplastic analysis with von Mises yield criterion (Q1742869)
- Linear convergence rates for variants of the alternating direction method of multipliers in smooth cases (Q1743535)
- A proximal difference-of-convex algorithm with extrapolation (Q1744881)
- Universal method for stochastic composite optimization problems (Q1746349)
- Asymptotic stabilization of inertial gradient dynamics with time-dependent viscosity (Q1785926)
- Proximal algorithms in statistics and machine learning (Q1790304)
- On the convergence of the iterates of proximal gradient algorithm with extrapolation for convex nonsmooth minimization problems (Q2010091)
- Local and global convergence of a general inertial proximal splitting scheme for minimizing composite functions (Q2013141)
- Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions (Q2020604)
- Stochastic optimization with adaptive restart: a framework for integrated local and global learning (Q2022223)
- On the interplay between acceleration and identification for the proximal gradient algorithm (Q2023654)
- Acceleration of primal-dual methods by preconditioning and simple subproblem procedures (Q2027970)
- An accelerated IRNN-iteratively reweighted nuclear norm algorithm for nonconvex nonsmooth low-rank minimization problems (Q2029679)
- A dual reformulation and solution framework for regularized convex clustering problems (Q2029898)
- Bounds for the tracking error of first-order online optimization methods (Q2032000)
- Functional penalised basis pursuit on spheres (Q2036409)
- Is there an analog of Nesterov acceleration for gradient-based MCMC? (Q2040101)
- Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences (Q2044479)
- Accelerated Bregman proximal gradient methods for relatively smooth convex optimization (Q2044481)
- An accelerated first-order method with complexity analysis for solving cubic regularization subproblems (Q2044484)
- Accelerated information gradient flow (Q2053344)
- An accelerated smoothing gradient method for nonconvex nonsmooth minimization in image processing (Q2059822)
- An improved linear convergence of FISTA for the LASSO problem with application to CT image reconstruction (Q2060059)
- Sobolev gradients for the Möbius energy (Q2065714)
- An inexact proximal gradient algorithm with extrapolation for a class of nonconvex nonsmooth optimization problems (Q2067857)
- A piecewise conservative method for unconstrained convex optimization (Q2070340)
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems (Q2070400)
- A strongly convergent algorithm for solving common variational inclusion with application to image recovery problems (Q2073956)
- Alternating direction based method for optimal control problem constrained by Stokes equation (Q2075142)
- Some modified fast iterative shrinkage thresholding algorithms with a new adaptive non-monotone stepsize strategy for nonsmooth and convex minimization problems (Q2082554)
- Understanding the acceleration phenomenon via high-resolution differential equations (Q2089769)
- From differential equation solvers to accelerated first-order methods for convex optimization (Q2089788)
- High-performance optimal incentive-seeking in transactive control for traffic congestion (Q2095352)
- Online optimization of switched LTI systems using continuous-time and hybrid accelerated gradient flows (Q2097731)
- The superiorization method with restarted perturbations for split minimization problems with an application to radiotherapy treatment planning (Q2101916)