The Boosted Difference of Convex Functions Algorithm for Nonsmooth Functions
From MaRDI portal
Publication:5221263
Abstract: The Boosted Difference of Convex functions Algorithm (BDCA) was recently proposed for minimizing smooth difference of convex (DC) functions. BDCA accelerates the convergence of the classical Difference of Convex functions Algorithm (DCA) thanks to an additional line search step. The purpose of this paper is twofold. Firstly, to show that this scheme can be generalized and successfully applied to certain types of nonsmooth DC functions, namely, those that can be expressed as the difference of a smooth function and a possibly nonsmooth one. Secondly, to show that there is complete freedom in the choice of the trial step size for the line search, which can further improve the performance of the method. We prove that any limit point of the BDCA iterative sequence is a critical point of the problem under consideration, and that the corresponding objective value is monotonically decreasing and convergent. Global convergence and the convergence rate of the iterations are obtained under the Kurdyka-Łojasiewicz property. Applications and numerical experiments for two problems in data science are presented, demonstrating that BDCA outperforms DCA. Specifically, for the Minimum Sum-of-Squares Clustering problem, BDCA was on average sixteen times faster than DCA, and for the Multidimensional Scaling problem, BDCA was three times faster than DCA.
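The scheme described in the abstract (a DCA step followed by an Armijo-type backtracking line search along the direction it produces) can be illustrated with a minimal sketch. This is not the paper's code: the problem instance (a convex quadratic minus the l1-norm), the trial step size, and the line-search parameters are all illustrative choices.

```python
import numpy as np

# Illustrative BDCA sketch for phi(x) = g(x) - h(x), where
# g(x) = 0.5 x^T A x + b^T x is smooth and strongly convex
# and h(x) = ||x||_1 is convex and nonsmooth (example instance).
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -2.0])

g = lambda x: 0.5 * x @ A @ x + b @ x
h = lambda x: np.abs(x).sum()
phi = lambda x: g(x) - h(x)

def bdca(x, iters=50, lam_bar=2.0, alpha=0.1, beta=0.5):
    for _ in range(iters):
        u = np.sign(x)                 # a subgradient of h at x
        y = np.linalg.solve(A, u - b)  # DCA step: argmin g(.) - <u, .>
        d = y - x                      # descent direction for phi at y
        # Backtracking line search from the trial step lam_bar;
        # the paper shows the trial step can be chosen freely.
        lam = lam_bar
        while lam > 1e-10 and phi(y + lam * d) > phi(y) - alpha * lam**2 * (d @ d):
            lam *= beta
        x = y + (lam if lam > 1e-10 else 0.0) * d
    return x

x_star = bdca(np.array([5.0, 5.0]))
```

At a critical point the smooth gradient A x + b belongs to the subdifferential of the l1-norm; on this instance the iterates settle where A x + b equals sign(x) componentwise. Setting `lam_bar = 0` would reduce the sketch to plain DCA, which is the comparison the paper's experiments make.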
Recommendations
- A new boosted proximal point algorithm for minimizing nonsmooth DC functions
- A successive difference-of-convex approximation method for a class of nonconvex nonsmooth optimization problems
- Nonsmooth and nonconvex optimization via approximate difference-of-convex decompositions
- scientific article; zbMATH DE number 4076972
- scientific article; zbMATH DE number 3878691
- Stochastic difference-of-convex-functions algorithms for nonconvex programming
- An algorithm for linearly constrained convex nondifferentiable minimization problems
- On an algorithm in nondifferential convex optimization
- A new method for nonsmooth convex optimization
- An accelerated proximal algorithm for the difference of convex programming
Cites work
- scientific article; zbMATH DE number 3650320
- scientific article; zbMATH DE number 46303
- scientific article; zbMATH DE number 1535760
- scientific article; zbMATH DE number 1795206
- scientific article; zbMATH DE number 3381034
- A D.C. Optimization Algorithm for Solving the Trust-Region Subproblem
- A DC programming approach for solving multicast network design problems via the Nesterov smoothing technique
- A general double-proximal gradient algorithm for d.c. programming
- A generalized proximal point algorithm for certain non-convex minimization problems
- A heuristic algorithm for solving the minimum sum-of-squares clustering problems
- A minimization method for the sum of a convex function and a continuously differentiable function
- Accelerating the DC algorithm for smooth functions
- An inertial algorithm for DC programming
- Characterizations of Łojasiewicz inequalities: Subgradient flows, talweg, convexity
- Clarke Subgradients of Stratifiable Functions
- Convergence analysis of a proximal point algorithm for minimizing differences of functions
- Convergence analysis of difference-of-convex algorithm with subanalytic data
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Convergence of non-smooth descent methods using the Kurdyka-Łojasiewicz inequality
- Convex analysis approach to d. c. programming: Theory, algorithms and applications
- DC programming and DCA: thirty years of developments
- Nesterov's smoothing technique and minimizing differences of convex functions for hierarchical clustering
- Numerical solution for optimization over the efficient set by d.c. optimization algorithms
- On gradients of functions definable in o-minimal structures
- On the convergence of an approximate proximal method for DC functions
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- Qualitative properties of the minimum sum-of-squares clustering problem
- The DC (Difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Transactions on Computational Collective Intelligence XIII
- Variational analysis and applications
Cited in (25)
- Accelerating the DC algorithm for smooth functions
- A boosted DC algorithm for non-differentiable DC components with non-monotone line search
- Convergence Rate Analysis of a Sequential Convex Programming Method with Line Search for a Class of Constrained Difference-of-Convex Optimization Problems
- The ABC of DC programming
- A new boosted proximal point algorithm for minimizing nonsmooth DC functions
- A unified Douglas-Rachford algorithm for generalized DC programming
- Incremental DC optimization algorithm for large-scale clusterwise linear regression
- The descent algorithm for solving the symmetric eigenvalue complementarity problem
- Open issues and recent advances in DC programming and DCA
- scientific article; zbMATH DE number 7523120
- A refined inertial DC algorithm for DC programming
- Alternating DC algorithm for partial DC programming problems
- Finding robust minimizer for non-convex phase retrieval
- A variable metric and Nesterov extrapolated proximal DCA with backtracking for a composite DC program
- Using positive spanning sets to achieve d-stationarity with the boosted DC algorithm
- Solving \(k\)-center problems involving sets based on optimization techniques
- A boosted-DCA with power-sum-DC decomposition for linearly constrained polynomial programs
- A proximal alternating direction method of multipliers for DC programming with structured constraints
- Non-monotone boosted DC and Caputo fractional tailored finite point algorithm for Rician denoising and deblurring
- Difference-of-Convex Algorithms for a Class of Sparse Group $\ell_0$ Regularized Optimization Problems
- A unified Bregman alternating minimization algorithm for generalized DC programs with application to imaging
- The boosted DC algorithm for linearly constrained DC programming
- A bundle-type method for nonsmooth DC programs
- An inertial proximal point method for difference of maximal monotone vector fields in Hadamard manifolds
- An augmented subgradient method for minimizing nonsmooth DC functions