Hessian barrier algorithms for linearly constrained optimization problems
DOI: 10.1137/18M1215682 · zbMATH Open: 1421.90164 · arXiv: 1809.09449 · OpenAlex: W2970875067 · Wikidata: Q127324284 · Scholia: Q127324284 · MaRDI QID: Q5233100 · FDO: Q5233100
Authors: Immanuel M. Bomze, Panayotis Mertikopoulos, Werner Schachinger, Mathias Staudigl
Publication date: 16 September 2019
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1809.09449
Recommendations
- Convergence to the optimal value for barrier methods combined with Hessian Riemannian gradient flows and generalized proximal algorithms
- Scientific article (zbMATH DE number 399385)
- Projected Hessian algorithm with backtracking interior point technique for linear constrained optimization
- New self-concordant barrier for the hypercube
- Hessian Riemannian Gradient Flows in Convex Programming
Keywords: nonconvex optimization; interior-point methods; traffic assignment; mirror descent; Hessian-Riemannian gradient descent
MSC: Convex programming (90C25); Nonconvex programming, global optimization (90C26); Nonlinear programming (90C30); Interior-point methods (90C51)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\) (with discussions and rejoinder)
- Title not available
- Algorithmic Game Theory
- Primal-dual subgradient methods for convex problems
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Decoding by Linear Programming
- Title not available
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Image deblurring with Poisson data: from cells to galaxies
- Degeneracy in interior point methods for linear programming: A survey
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Online learning and online convex optimization
- Title not available
- Evolutionary game dynamics
- Dual subgradient algorithms for large-scale nonsmooth learning problems
- On Hessian Riemannian structures
- Evolution towards the maximum clique
- Regularized Lotka-Volterra dynamical system as continuous proximal-like method in optimization
- A Trust Region Interior Point Algorithm for Linearly Constrained Optimization
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- The Heavy Ball with Friction Method, I. The Continuous Dynamical System: Global Exploration of the Local Minima of a Real-Valued Function by Asymptotic Analysis of a Dissipative Dynamical System
- Hessian Riemannian Gradient Flows in Convex Programming
- Convex optimization: algorithms and complexity
- On the long time behavior of second order differential equations with asymptotically small dissipation
- Regularity versus Degeneracy in Dynamics, Games, and Optimization: A Unified Approach to Different Aspects
- Quadratic programming is in NP
- Singular Riemannian barrier methods and gradient-projection dynamical systems for constrained optimization
- Regularized Newton method for unconstrained convex optimization
- Global escape strategies for maximizing quadratic forms over a simplex
- Title not available (Why is that?)
- A first-order interior-point method for linearly constrained smooth optimization
- Barrier Operators and Associated Gradient-Like Dynamical Systems for Constrained Minimization Problems
- Convergence properties of Dikin's affine scaling algorithm for nonconvex quadratic minimization
- Nonmonotone projected gradient methods based on barrier and Euclidean distances
- A modification of Karmarkar's linear programming algorithm
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than \(1/k^2\)
- Riemannian game dynamics
- Learning in games with continuous action sets and unknown payoff functions
- On the convergence of gradient-like flows with noisy gradient input
- A variational perspective on accelerated methods in optimization
- The complexity of simple models -- a study of worst and typical hard cases for the standard quadratic optimization problem
- Distributed Stochastic Optimization via Matrix Exponential Learning
Cited In (6)
- First-order methods for convex optimization
- Convergence to the optimal value for barrier methods combined with Hessian Riemannian gradient flows and generalized proximal algorithms
- Generalized self-concordant analysis of Frank-Wolfe algorithms
- Modification of the confidence bar algorithm based on approximations of the main diagonal of the Hessian matrix for solving optimal control problems
- Hessian barrier algorithms for non-convex conic optimization
- A Geodesic Interior-Point Method for Linear Optimization over Symmetric Cones