Hessian barrier algorithms for linearly constrained optimization problems
From MaRDI portal
Publication:5233100
Abstract: In this paper, we propose an interior-point method for linearly constrained optimization problems (possibly nonconvex). The method, which we call the Hessian barrier algorithm (HBA), combines a forward Euler discretization of Hessian Riemannian gradient flows with an Armijo backtracking step-size policy. In this way, HBA can be seen as an alternative to mirror descent (MD), and it contains as special cases the affine scaling algorithm, regularized Newton processes, and several other iterative solution methods. Our main result is that, modulo a non-degeneracy condition, the algorithm converges to the problem's set of critical points; hence, in the convex case, the algorithm converges globally to the problem's minimum set. In the case of linearly constrained quadratic programs (not necessarily convex), we also show that the method's convergence rate is \(\mathcal{O}(1/k^{\rho})\) for some \(\rho \in (0,1]\) that depends only on the choice of kernel function (i.e., not on the problem's primitives). These theoretical results are validated by numerical experiments on standard non-convex test functions and large-scale traffic assignment problems.
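The abstract describes one HBA iteration as a Hessian-preconditioned gradient step combined with Armijo backtracking. The following is a minimal sketch of that idea, not the paper's actual implementation: it assumes the positive orthant as feasible set (no equality constraints) and the negative-entropy kernel \(h(x) = \sum_i x_i \log x_i\), whose barrier Hessian is diag(1/x_i), so the preconditioned direction reduces to \(-x \odot \nabla f(x)\). The function name `hba_step` and all parameter defaults are illustrative choices, not from the paper.

```python
import numpy as np

def hba_step(f, grad_f, x, sigma=1e-4, beta=0.5, alpha0=1.0):
    """One sketched Hessian-barrier step on the positive orthant.

    With the entropic kernel h(x) = sum x_i log x_i, the barrier Hessian
    is diag(1/x_i), so the Hessian-preconditioned negative gradient is
    -x * grad_f(x): a forward Euler step of the Hessian Riemannian
    gradient flow. The step size is chosen by Armijo backtracking,
    shrinking alpha until the iterate stays strictly positive and the
    sufficient-decrease condition holds.
    """
    g = grad_f(x)
    v = -x * g            # preconditioned descent direction
    slope = g @ v         # directional derivative along v (<= 0)
    alpha = alpha0
    while True:
        x_new = x + alpha * v
        if np.all(x_new > 0) and f(x_new) <= f(x) + sigma * alpha * slope:
            return x_new
        alpha *= beta     # backtrack

# Illustrative usage: minimize f(x) = ||x - 2||^2 over x > 0.
f = lambda x: np.sum((x - 2.0) ** 2)
grad_f = lambda x: 2.0 * (x - 2.0)
x = np.ones(3)
for _ in range(200):
    x = hba_step(f, grad_f, x)
```

For this toy convex quadratic the iterates stay interior and approach the unconstrained minimizer at \(x = 2\); the affine-scaling flavor of the method comes entirely from the `x * grad_f(x)` preconditioning.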
Recommendations
- Convergence to the optimal value for barrier methods combined with Hessian Riemannian gradient flows and generalized proximal algorithms
- scientific article; zbMATH DE number 399385
- Projected Hessian algorithm with backtracking interior point technique for linear constrained optimization
- New self-concordant barrier for the hypercube
- Hessian Riemannian Gradient Flows in Convex Programming
Cites work
- scientific article; zbMATH DE number 4202017 (no title available)
- scientific article; zbMATH DE number 4215340 (no title available)
- scientific article; zbMATH DE number 3790208 (no title available)
- scientific article; zbMATH DE number 729680 (no title available)
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A Trust Region Interior Point Algorithm for Linearly Constrained Optimization
- A first-order interior-point method for linearly constrained smooth optimization
- A modification of Karmarkar's linear programming algorithm
- A variational perspective on accelerated methods in optimization
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Algorithmic Game Theory
- Barrier Operators and Associated Gradient-Like Dynamical Systems for Constrained Minimization Problems
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Convergence properties of Dikin's affine scaling algorithm for nonconvex quadratic minimization
- Convex optimization: algorithms and complexity
- Decoding by Linear Programming
- Degeneracy in interior point methods for linear programming: A survey
- Distributed Stochastic Optimization via Matrix Exponential Learning
- Dual subgradient algorithms for large-scale nonsmooth learning problems
- Evolution towards the maximum clique
- Evolutionary game dynamics
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- Global escape strategies for maximizing quadratic forms over a simplex
- Hessian Riemannian Gradient Flows in Convex Programming
- Image deblurring with Poisson data: from cells to galaxies
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- Learning in games with continuous action sets and unknown payoff functions
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Nonmonotone projected gradient methods based on barrier and Euclidean distances
- On Hessian Riemannian structures
- On the convergence of gradient-like flows with noisy gradient input
- On the long time behavior of second order differential equations with asymptotically small dissipation
- Online learning and online convex optimization
- Primal-dual subgradient methods for convex problems
- Quadratic programming is in NP
- Regularity versus Degeneracy in Dynamics, Games, and Optimization: A Unified Approach to Different Aspects
- Regularized Lotka-Volterra dynamical system as continuous proximal-like method in optimization
- Regularized Newton method for unconstrained convex optimization
- Riemannian game dynamics
- Singular Riemannian barrier methods and gradient-projection dynamical systems for constrained optimization
- The heavy ball with friction method. I: The continuous dynamical system: global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\) (with discussions and rejoinder)
- The complexity of simple models -- a study of worst and typical hard cases for the standard quadratic optimization problem
- The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than \(1/k^2\)
Cited in (6)
- Generalized self-concordant analysis of Frank-Wolfe algorithms
- Modification of the confidence bar algorithm based on approximations of the main diagonal of the Hessian matrix for solving optimal control problems
- Convergence to the optimal value for barrier methods combined with Hessian Riemannian gradient flows and generalized proximal algorithms
- Hessian barrier algorithms for non-convex conic optimization
- First-order methods for convex optimization
- A Geodesic Interior-Point Method for Linear Optimization over Symmetric Cones
This page was built for publication: Hessian barrier algorithms for linearly constrained optimization problems