Run-and-inspect method for nonconvex optimization and global optimality bounds for R-local minimizers
Publication:2425163
Abstract: Many optimization algorithms converge to stationary points. When the underlying problem is nonconvex, they may get trapped at local minimizers and occasionally stagnate near saddle points. We propose the Run-and-Inspect Method, which adds an "inspect" phase to existing algorithms that helps escape from non-global stationary points. The inspection samples a set of points within a radius \(R\) of the current point. When a sample point yields a sufficient decrease in the objective, we move there and resume the existing algorithm. If no sufficient decrease is found, the current point is called an approximate \(R\)-local minimizer. We show that an \(R\)-local minimizer is globally optimal, up to a specific error depending on \(R\), if the objective function can be implicitly decomposed into a smooth convex function plus a restricted function that is possibly nonconvex and nonsmooth. For high-dimensional problems, we introduce blockwise inspections to overcome the curse of dimensionality while still maintaining optimality bounds, up to a factor equal to the number of blocks. Our method performs well on a set of artificial and realistic nonconvex problems when coupled with gradient descent, coordinate descent, EM, and prox-linear algorithms.
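To make the run/inspect loop described in the abstract concrete, here is a minimal Python sketch. It pairs a base solver (plain gradient descent here) with an inspection phase that samples points in the ball of radius \(R\) around the current point and moves to any sample achieving a sufficient decrease. The sampling scheme and every function name, parameter name, and default value (`n_samples`, `delta`, `lr`, and so on) are illustrative assumptions, not the paper's actual algorithm, which also covers blockwise inspections and other base solvers such as coordinate descent, EM, and prox-linear methods.

```python
import numpy as np

def run_and_inspect(f, grad, x0, R=1.0, n_samples=50, delta=1e-3,
                    lr=0.01, inner_iters=500, max_rounds=20, seed=0):
    """Illustrative sketch of the run/inspect idea (not the paper's code):
    alternate gradient descent with random inspection in a ball of radius R,
    accepting any sampled point that decreases f by at least delta."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_rounds):
        # "Run" phase: iterate the base algorithm until it (approximately) stalls.
        for _ in range(inner_iters):
            x = x - lr * grad(x)
        # "Inspect" phase: sample points within radius R of the current point.
        fx = f(x)
        escaped = False
        for _ in range(n_samples):
            d = rng.standard_normal(x.shape)
            y = x + R * rng.uniform() * d / np.linalg.norm(d)
            if f(y) <= fx - delta:  # sufficient decrease: move and resume
                x, escaped = y, True
                break
        if not escaped:
            # No sample improves f by delta: return x as an approximate
            # R-local minimizer in the sense of the abstract.
            return x
    return x

if __name__ == "__main__":
    # Toy nonconvex objective with many spurious local minima.
    f = lambda x: np.dot(x, x) + 2.0 * np.sin(5.0 * x).sum()
    grad = lambda x: 2.0 * x + 10.0 * np.cos(5.0 * x)
    x_star = run_and_inspect(f, grad, x0=np.array([2.0, -2.0]), R=1.0)
    print("approximate R-local minimizer:", x_star, "objective:", f(x_star))
```

In this sketch, `delta` plays the role of the sufficient-decrease threshold: once no sampled point within radius \(R\) improves the objective by `delta`, the loop stops and returns the current point, which is exactly the approximate \(R\)-local minimizer certificate to which the abstract's global optimality bound applies.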
Recommendations
- On a local and global search involved in nonconvex optimization problems
- Local minima of nonconvex problems
- On duality bound methods for nonconvex global optimization
- On a class of nonconvex problems where all local minima are global
- Local convexification of the Lagrangian function for nonlinear nonconvex optimization
- From global to local convergence of interior methods for nonlinear optimization
- Approximate global minimization of nonconvex functions that are close to convex
Cites work
- A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Coordinate-friendly structures, algorithms and applications
- Cubic regularization of Newton method and its global performance
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Deep relaxation: partial differential equations for optimizing deep neural networks
- Entropy-SGD: biasing gradient descent into wide valleys
- GAITA: a Gauss-Seidel iterative thresholding algorithm for \(\ell_q\) regularized least squares regression
- Global convergence of ADMM in nonconvex nonsmooth optimization
- Gradient descent only converges to minimizers: non-isolated critical points and invariant regions
- Introduction to Derivative-Free Optimization
- Nearly unbiased variable selection under minimax concave penalty
- Nonconvex Sparse Logistic Regression With Weakly Convex Regularization
- Optimization by simulated annealing
- Stochastic backward Euler: an implicit gradient descent algorithm for \(k\)-means clustering
- Trust Region Methods