Convergence rates of subgradient methods for quasi-convex optimization problems
Abstract: Quasi-convex optimization plays a pivotal role in many fields, including economics and finance; the subgradient method is an effective iterative algorithm for solving large-scale quasi-convex optimization problems. In this paper, we investigate the iteration complexity and convergence rates of various subgradient methods for solving quasi-convex optimization in a unified framework. In particular, we consider a sequence satisfying a general (inexact) basic inequality, and establish a global convergence theorem and the iteration complexity under the constant, diminishing, or dynamic stepsize rules. More importantly, we establish linear (or sublinear) convergence rates of the sequence under the additional assumptions of weak sharp minima of Hölderian order and upper-bounded noise. These convergence theorems are applied to establish the iteration complexity and convergence rates of several subgradient methods, including the standard/inexact/conditional subgradient methods, for solving quasi-convex optimization problems under the assumptions of the Hölder condition and/or weak sharp minima of Hölderian order.
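For orientation, the two regularity conditions named in the abstract typically take the following standard forms in this literature (the symbols \(L\), \(p\), \(\eta\), \(q\), \(f^{*}\), and \(X^{*}\) are generic notation, not taken from this page): the Hölder condition of order \(p\),
\[
|f(x) - f(y)| \le L \, \|x - y\|^{p},
\]
and weak sharp minima of Hölderian order \(q\),
\[
f(x) - f^{*} \ge \eta \, \operatorname{dist}(x, X^{*})^{q},
\]
where \(f^{*}\) is the optimal value and \(X^{*}\) the solution set; the order \(q\) governs whether the resulting rate is linear or sublinear.

The sketch below illustrates the kind of iteration the abstract studies: a projected subgradient step with a normalized (quasi-)subgradient direction under a constant or diminishing stepsize rule, in the spirit of the cited work on subgradient methods for quasiconvex minimization. It is a minimal generic illustration, not the paper's exact scheme; the function names and the toy objective are our own.

```python
import numpy as np

def subgradient_method(subgrad, project, x0, num_iters=1000,
                       step="diminishing", alpha=0.5):
    """Projected subgradient iteration with a normalized direction:
        x_{k+1} = P_X( x_k - v_k * g_k / ||g_k|| ),
    the update commonly used for quasi-convex objectives.

    subgrad : returns a (quasi-)subgradient g_k at x_k
    project : projection P_X onto the constraint set X
    step    : "constant" uses v_k = alpha;
              "diminishing" uses v_k = alpha / sqrt(k + 1)
    """
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        g = subgrad(x)
        gnorm = np.linalg.norm(g)
        if gnorm == 0.0:  # zero direction: nothing further to do
            break
        v = alpha if step == "constant" else alpha / np.sqrt(k + 1)
        x = project(x - v * g / gnorm)
    return x

# Toy quasi-convex example: f(x) = sqrt(|x_1| + |x_2|) on the box [-1, 1]^2.
# A monotone transform of a convex function is quasi-convex, and a
# subgradient of the inner convex function gives a valid direction here.
if __name__ == "__main__":
    subgrad = lambda x: np.sign(x)
    project = lambda x: np.clip(x, -1.0, 1.0)
    x_final = subgradient_method(subgrad, project, x0=[0.9, -0.7])
    print(x_final)  # approaches the minimizer (0, 0)
```

With the diminishing rule the stepsizes satisfy \(v_k \to 0\) and \(\sum_k v_k = \infty\), the classical requirement for global convergence; a constant stepsize typically drives the iterates only into a neighborhood of the solution set whose size scales with \(\alpha\).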
Recommendations
- Abstract convergence theorem for quasi-convex optimization problems with applications
- Quasi-convex feasibility problems: subgradient methods and convergence rates
- Inexact subgradient methods for quasi-convex optimization problems
- Conditional subgradient methods for constrained quasi-convex optimization problems
- Stochastic subgradient method for quasi-convex optimization problems
Cites work
- scientific article; zbMATH DE number 4015993 (no title available)
- scientific article; zbMATH DE number 1818892 (no title available)
- scientific article; zbMATH DE number 1095224 (no title available)
- scientific article; zbMATH DE number 3435272 (no title available)
- scientific article; zbMATH DE number 3894826 (no title available)
- scientific article; zbMATH DE number 3282977 (no title available)
- A Unified Augmented Lagrangian Approach to Duality and Exact Penalization
- A generalized subgradient method with relaxation step
- A linear scalarization proximal point method for quasiconvex multiobjective minimization
- A scalarization proximal point method for quasiconvex multiobjective minimization
- A unified approach to error bounds for structured convex optimization problems
- Abstract convergence theorem for quasi-convex optimization problems with applications
- An extension of proximal methods for quasiconvex minimization on the nonnegative orthant
- An inexact proximal method for quasiconvex minimization
- Conditional subgradient methods for constrained quasi-convex optimization problems
- Conditional subgradient optimization -- theory and applications
- Convergence and efficiency of subgradient methods for quasiconvex minimization
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- Convergence rate of incremental subgradient algorithms
- Dual subgradient method with averaging for optimal resource allocation
- Faster subgradient methods for functions with Hölderian growth
- Finite termination of the proximal point algorithm
- From error bounds to the complexity of first-order descent methods for convex functions
- Generalized concavity
- Handbook of generalized convexity and generalized monotonicity
- Incremental subgradient methods for nondifferentiable optimization
- Incremental subgradients for constrained convex optimization: A unified framework and new methods
- Inexact subgradient methods for quasi-convex optimization problems
- Linear convergence of CQ algorithms and applications in gene regulatory network inference
- Linear convergence of epsilon-subgradient descent methods for a class of convex functions
- Modified inexact Levenberg-Marquardt methods for solving nonlinear least squares problems
- New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure
- Nonlinear programming methods in the presence of noise
- Normal characterization of the main classes of quasiconvex functions
- On Convergence Properties of a Subgradient Method
- On convergence rates of linearized proximal algorithms for convex composite optimization with applications
- On convergence rates of subgradient optimization methods
- On properties of supporting and quasi-supporting vectors
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Portfolio optimization with quasiconvex risk measures
- Primal-dual subgradient methods for convex problems
- Projected subgradient methods with non-Euclidean distances for non-differentiable convex minimization and variational inequalities
- Proximal point algorithms on Hadamard manifolds: linear convergence and finite termination
- Proximal point methods for quasiconvex and convex functions with Bregman distances on Hadamard manifolds
- Steepest descent method with a generalized Armijo search for quasiconvex functions on Riemannian manifolds
- Subgradient methods for saddle point problems of quasiconvex optimization
- Subgradient methods for saddle-point problems
- Surpassing gradient descent provably: a cyclic incremental method with linear convergence rate
- Weak Sharp Minima in Mathematical Programming
- Weak Sharp Minima: Characterizations and Sufficient Conditions
- Weak sharp minima revisited. I: Basic theory
- \(\ell_p\) regularized low-rank approximation via iterative reweighted singular value minimization
Cited in (18)
- Convergence Rate of an Optimization Algorithm for Minimizing Quadratic Functions with Separable Convex Constraints
- Stochastic quasi-subgradient method for stochastic quasi-convex feasibility problems
- Quasi-subgradient methods with Bregman distance for quasi-convex feasibility problems
- Self-adaptive algorithms for quasiconvex programming and applications to machine learning
- Quasi-convex feasibility problems: subgradient methods and convergence rates
- Rates of Convergence for Quasi-Additive Smooth Euclidean Functionals and Application to Combinatorial Optimization Problems
- Accelerated methods for weakly-quasi-convex optimization problems
- Convergence rates for deterministic and stochastic subgradient methods without Lipschitz continuity
- Convergence of inexact quasisubgradient methods with extrapolation
- Interior quasi-subgradient method with non-Euclidean distances for constrained quasi-convex optimization problems in Hilbert spaces
- Multiple-sets split quasi-convex feasibility problems: adaptive subgradient methods with convergence guarantee
- A subgradient projection method for quasiconvex minimization
- Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality
- Abstract convergence theorem for quasi-convex optimization problems with applications
- Convergence rate of a rectangular subdivision-based optimization algorithm for smooth multivariate functions
- Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3
- Inexact subgradient methods for quasi-convex optimization problems
- Conditional subgradient methods for constrained quasi-convex optimization problems