A simplified view of first order methods for optimization
From MaRDI portal
Publication: 1650767
DOI: 10.1007/s10107-018-1284-2 · zbMath: 1391.90482 · OpenAlex: W2800693632 · Wikidata: Q129833120 · Scholia: Q129833120 · MaRDI QID: Q1650767
Publication date: 13 July 2018
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-018-1284-2
Keywords: first order algorithms; convex and nonconvex minimization; descent lemma; Kurdyka–Łojasiewicz property; non-Euclidean Bregman distance; proximal framework
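The keywords point at the record's central object: a proximal framework built on a non-Euclidean Bregman distance instead of the squared Euclidean norm. As an illustrative sketch only (not taken from the paper itself): with the entropy kernel \(h(x) = \sum_i x_i \log x_i\) on the positive orthant, the Bregman proximal-gradient step \(\min_y \langle \nabla f(x_k), y \rangle + \tfrac{1}{\lambda} D_h(y, x_k)\) has the closed-form multiplicative update \(x_{k+1} = x_k \odot e^{-\lambda \nabla f(x_k)}\). The objective, step size, and iteration count below are assumed for the toy illustration.

```python
import numpy as np

def entropy_bregman_grad_step(x, grad, step):
    """One Bregman proximal-gradient (mirror-descent) step with the
    entropy kernel h(x) = sum_i x_i * log(x_i) on the positive orthant.
    Minimizing <grad, y> + (1/step) * D_h(y, x) over y > 0 gives the
    closed-form multiplicative update y = x * exp(-step * grad)."""
    return x * np.exp(-step * grad)

# Toy illustration (assumed example): minimize f(x) = <c, x> - sum(log x)
# over x > 0, whose unique minimizer is x_i = 1 / c_i.
c = np.array([1.0, 2.0, 4.0])
x = np.ones(3)
for _ in range(200):
    grad = c - 1.0 / x                       # gradient of f at the iterate
    x = entropy_bregman_grad_step(x, grad, step=0.1)

print(np.round(x, 3))  # approaches [1.0, 0.5, 0.25]
```

Note that the update never leaves the positive orthant, so no projection is needed; that constraint-adapted geometry is what the non-Euclidean kernel buys.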
Related Items
- Additive Schwarz methods for convex optimization with backtracking
- Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Nonconvex Optimization
- A dynamic alternating direction of multipliers for nonconvex minimization with nonlinear functional equality constraints
- Revisiting linearized Bregman iterations under Lipschitz-like convexity condition
- Optimal complexity and certification of Bregman first-order methods
- Fast proximal algorithms for nonsmooth convex optimization
- Some brief observations in minimizing the sum of locally Lipschitzian functions
- Accelerated additive Schwarz methods for convex optimization with adaptive restart
- Bregman Proximal Point Algorithm Revisited: A New Inexact Version and Its Inertial Variant
- Proximal Splitting Algorithms for Convex Optimization: A Tour of Recent Advances, with New Twists
- Some extensions of the operator splitting schemes based on Lagrangian and primal–dual: a unified proximal point analysis
- Affine Invariant Convergence Rates of the Conditional Gradient Method
- Stochastic composition optimization of functions without Lipschitz continuous gradient
- No-regret algorithms in on-line learning, games and convex optimization
- A generalized forward-backward splitting operator: degenerate analysis and applications
- Bregman-Golden ratio algorithms for variational inequalities
- An elementary approach to tight worst case complexity analysis of gradient based methods
- An alternating structure-adapted Bregman proximal gradient descent algorithm for constrained nonconvex nonsmooth optimization problems and its inertial variant
- Provable Phase Retrieval with Mirror Descent
- Smoothing fast proximal gradient algorithm for the relaxation of matrix rank regularization problem
- First-order methods for convex optimization
- An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems
- Bregman proximal point type algorithms for quasiconvex minimization
- A simple nearly optimal restart scheme for speeding up first-order methods
- Bregman three-operator splitting methods
- Inexact proximal \(\epsilon\)-subgradient methods for composite convex optimization problems
- The chain rule for VU-decompositions of nonsmooth functions
- New characterizations of Hoffman constants for systems of linear constraints
- Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
- The condition number of a function relative to a set
- A Data-Independent Distance to Infeasibility for Linear Conic Systems
- Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
- Multi-block Bregman proximal alternating linearized minimization and its application to orthogonal nonnegative matrix factorization
- A block inertial Bregman proximal algorithm for nonsmooth nonconvex problems with application to symmetric nonnegative matrix tri-factorization
- Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
- On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity
- A Bregman Forward-Backward Linesearch Algorithm for Nonconvex Composite Optimization: Superlinear Convergence to Nonisolated Local Minima
- A dual Bregman proximal gradient method for relatively-strongly convex optimization
- On the nonexpansive operators based on arbitrary metric: a degenerate analysis
- Dual Space Preconditioning for Gradient Descent
- Stochastic proximal linear method for structured non-convex problems
- Bregman Finito/MISO for Nonconvex Regularized Finite Sum Minimization without Lipschitz Gradient Continuity
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A proximal method for composite minimization
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Regularized Lotka-Volterra dynamical system as continuous proximal-like method in optimization
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- Projected subgradient methods with non-Euclidean distances for non-differentiable convex minimization and variational inequalities
- On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space
- Ergodic convergence to a zero of the sum of monotone operators in Hilbert space
- On gradients of functions definable in o-minimal structures
- Proximal minimization algorithm with \(D\)-functions
- Nonlinear rescaling and proximal-like methods in convex optimization
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Forward-backward splitting with Bregman distances
- Interior projection-like methods for monotone variational inequalities
- The Ordered Subsets Mirror Descent Optimization Method with Applications to Tomography
- A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Completion
- Smoothing and First Order Methods: A Unified Framework
- Convergence Analysis of a Proximal-Like Minimization Algorithm Using Bregman Functions
- Characterizations of Łojasiewicz inequalities: Subgradient flows, talweg, convexity
- A generalized proximal point algorithm for certain non-convex minimization problems
- On the Convergence of the Proximal Point Algorithm for Convex Minimization
- Entropic Proximal Mappings with Applications to Nonlinear Programming
- Monotone Operators and the Proximal Point Algorithm
- Variational Analysis
- Convergence of Proximal-Like Algorithms
- A Generalized Proximal Point Algorithm for the Variational Inequality Problem in a Hilbert Space
- Barrier Operators and Associated Gradient-Like Dynamical Systems for Constrained Minimization Problems
- First Order Methods Beyond Convexity and Lipschitz Gradient Continuity with Applications to Quadratic Inverse Problems
- First-Order Methods in Optimization
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Nonlinear Proximal Point Algorithms Using Bregman Functions, with Applications to Convex Programming
- Convex Optimization in Signal Processing and Communications
- Interior Proximal and Multiplier Methods Based on Second Order Homogeneous Kernels
- Rate of Convergence Analysis of Decomposition Methods Based on the Proximal Method of Multipliers for Convex Minimization
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- Signal Recovery by Proximal Forward-Backward Splitting
- Proximité et dualité dans un espace hilbertien
- Weak convergence of the sequence of successive approximations for nonexpansive mappings
- Convex Analysis
- Interior Gradient and Epsilon-Subgradient Descent Methods for Constrained Convex Minimization
- A Descent Lemma Beyond Lipschitz Gradient Continuity: First-Order Methods Revisited and Applications
- Convex analysis and monotone operator theory in Hilbert spaces