Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
Publication: Q2010105
DOI: 10.1007/s10898-019-00818-6 · zbMATH Open: 1432.90112 · arXiv: 1710.06073 · OpenAlex: W3102429306 · MaRDI QID: Q2010105
Authors: Carisa Kwok Wai Yu, Yaohua Hu, Xiao Qi Yang
Publication date: 3 December 2019
Published in: Journal of Global Optimization
Abstract: The sum of ratios problem has a variety of important applications in economics and management science, but it is difficult to solve globally. In this paper, we consider the problem of minimizing a sum of nondifferentiable quasi-convex component functions over a closed and convex set, which includes the sum of ratios problem as a special case. A sum of quasi-convex component functions is not necessarily quasi-convex, so this study goes beyond quasi-convex optimization. Exploiting the structure of the sum-minimization problem, we propose a new incremental subgradient method for this problem and investigate its convergence to a global optimal solution under constant, diminishing, or dynamic stepsize rules, a homogeneity assumption, and a Hölder condition of a given order. To economize on the computational cost of subgradients of a large number of component functions, we further propose a randomized incremental subgradient method, in which only one randomly selected component function is used to construct the subgradient direction at each iteration. Convergence properties are obtained, with probability 1, in terms of function values and of distances of iterates from the optimal solution set. The proposed incremental subgradient methods are applied to the sum of ratios problem, as well as to the multiple Cobb-Douglas productions efficiency problem, and the numerical results show that the proposed methods are efficient for solving large-scale sum of ratios problems.
Full work available at URL: https://arxiv.org/abs/1710.06073
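The incremental scheme described in the abstract can be sketched as follows: cycle through the quasi-convex components, take one projected step per component along a unit quasi-subgradient (a normal to the current sublevel set), and diminish the stepsize over cycles. The linear-fractional components, box constraint, and stepsize constants below are hypothetical illustrations, not the paper's experiments.

```python
import numpy as np

def make_ratio(a, b, c, d):
    """Linear-fractional component f(x) = (a.x + b) / (c.x + d), which is
    quasi-convex on any region where the denominator stays positive."""
    a, c = np.asarray(a, float), np.asarray(c, float)
    def f(x):
        return (a @ x + b) / (c @ x + d)
    def quasi_subgrad(x):
        # A quasi-subgradient is a unit normal at x to the sublevel set
        # {y : (a.y + b) - f(x) * (c.y + d) <= 0}.
        g = a - f(x) * c
        n = np.linalg.norm(g)
        return g / n if n > 0 else g
    return f, quasi_subgrad

def incremental_quasi_subgradient(components, x0, lo=0.0, hi=1.0,
                                  cycles=500, alpha0=0.5):
    """Cycle through the components, taking one projected quasi-subgradient
    step per component with diminishing stepsize alpha0 / sqrt(k)."""
    x = np.asarray(x0, float).copy()
    best_x = x.copy()
    best_val = sum(f(x) for f, _ in components)
    for k in range(1, cycles + 1):
        alpha = alpha0 / np.sqrt(k)
        for _, qsg in components:
            x = np.clip(x - alpha * qsg(x), lo, hi)  # projection onto the box
        val = sum(f(x) for f, _ in components)
        if val < best_val:  # subgradient methods are not monotone; track the best
            best_val, best_x = val, x.copy()
    return best_x, best_val

# Hypothetical two-component sum-of-ratios instance on the box [0, 1]^2;
# each ratio is individually minimized at the origin, so the optimum is 1.0.
comps = [make_ratio([1, 2], 1, [1, 0], 2),
         make_ratio([2, 1], 1, [0, 1], 2)]
x_star, val_star = incremental_quasi_subgradient(comps, [1.0, 1.0])
```

The randomized variant from the abstract would replace the inner `for` loop with a single step along the quasi-subgradient of one uniformly sampled component per iteration.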
Recommendations
- Inexact subgradient methods for quasi-convex optimization problems
- Stochastic subgradient method for quasi-convex optimization problems
- Quasi-convex feasibility problems: subgradient methods and convergence rates
- Abstract convergence theorem for quasi-convex optimization problems with applications
- Convergence and efficiency of subgradient methods for quasiconvex minimization
Keywords: incremental approach; subgradient method; quasi-convex programming; sum of ratios problem; sum-minimization problem
Cites Work
- Adaptive subgradient methods for online learning and stochastic optimization
- On Projection Algorithms for Solving Convex Feasibility Problems
- Primal-dual subgradient methods for convex problems
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- Generalized concavity
- Solving the sum-of-ratios problem by an interior-point method
- Fractional programming: The sum-of-ratios case
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Handbook of generalized convexity and generalized monotonicity
- Incremental subgradient methods for nondifferentiable optimization
- BOND PORTFOLIO OPTIMIZATION BY BILINEAR FRACTIONAL PROGRAMMING
- Incremental proximal methods for large scale convex optimization
- The effect of deterministic noise in subgradient methods
- A Unified Augmented Lagrangian Approach to Duality and Exact Penalization
- Inexact subgradient methods for quasi-convex optimization problems
- Incremental subgradients for constrained convex optimization: A unified framework and new methods
- Branch-and-bound outer approximation algorithm for sum-of-ratios fractional programs
- Conditional subgradient optimization -- theory and applications
- Incremental stochastic subgradient algorithms for convex optimization
- Convergence and efficiency of subgradient methods for quasiconvex minimization
- Projected subgradient methods with non-Euclidean distances for non-differentiable convex minimization and variational inequalities
- Algorithms for the quasiconvex feasibility problem
- Stochastic subgradient method for quasi-convex optimization problems
- On properties of supporting and quasi-supporting vectors
- On convergence rates of linearized proximal algorithms for convex composite optimization with applications
- On Convergence Properties of a Subgradient Method
- Abstract convergence theorem for quasi-convex optimization problems with applications
- Normal characterization of the main classes of quasiconvex functions
- Fractional Programming with Homogeneous Functions
- Adaptive subgradient method for the split quasi-convex feasibility problems
- Normalized Incremental Subgradient Algorithm and Its Application
Cited In (9)
- Multiple-sets split quasi-convex feasibility problems: Adaptive subgradient methods with convergence guarantee
- Stochastic quasi-subgradient method for stochastic quasi-convex feasibility problems
- Stochastic subgradient algorithm for nonsmooth nonconvex optimization
- Incremental Quasi-Subgradient Method for Minimizing Sum of Geodesic Quasi-Convex Functions on Riemannian Manifolds with Applications
- An algorithm for stochastic convex-concave fractional programs with applications to production efficiency and equitable resource allocation
- Incremental quasi-Newton algorithms for solving a nonconvex, nonsmooth, finite-sum optimization problem
- Interior quasi-subgradient method with non-Euclidean distances for constrained quasi-convex optimization problems in Hilbert spaces
- A subgradient projection method for quasiconvex minimization
- The equivalence of three types of error bounds for weakly and approximately convex functions