Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
From MaRDI portal
Publication: 2330649
DOI: 10.1007/s10107-018-1290-4
zbMath: 1423.90248
arXiv: 1702.04300
OpenAlex: W2591820521
MaRDI QID: Q2330649
Gabriel Haeser, Hongcheng Liu, Yinyu Ye
Publication date: 22 October 2019
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1702.04300
constrained optimization; nonconvex programming; interior point method; nonsmooth problems; first order algorithm
- Analysis of algorithms and problem complexity (68Q25)
- Abstract computational complexity for mathematical programming problems (90C60)
- Nonlinear programming (90C30)
- Interior-point methods (90C51)
Related Items
- Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
- High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
- Moreau envelope augmented Lagrangian method for nonconvex optimization with linear constraints
- Worst-case evaluation complexity of a quadratic penalty method for nonconvex optimization
- A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees
- A Newton-CG Based Augmented Lagrangian Method for Finding a Second-Order Stationary Point of Nonconvex Equality Constrained Optimization with Complexity Guarantees
- Recent Theoretical Advances in Non-Convex Optimization
- On constrained optimization with nonconvex regularization
- Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints
- On the use of Jordan algebras for improving global convergence of an augmented Lagrangian method in nonlinear semidefinite programming
- Sparse Solutions by a Quadratically Constrained ℓq (0 < q < 1) Minimization Model
- On Optimality Conditions for Nonlinear Conic Programming
Cites Work
- A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
- Corrigendum to: ``On the complexity of finding first-order critical points in constrained nonlinear optimization''
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Complexity bounds for second-order optimality in unconstrained optimization
- Augmented Lagrangians with constrained subproblems and convergence to second-order stationary points
- A relaxed constant positive linear dependence constraint qualification and applications
- A new polynomial-time algorithm for linear programming
- On affine scaling algorithms for nonconvex quadratic programming
- On the complexity of approximating a KKT point of quadratic programming
- Introductory lectures on convex optimization. A basic course.
- A second-order optimality condition with first- and second-order complementarity associated with global convergence of algorithms
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- New constraint qualifications with second-order properties in nonlinear optimization
- Evaluation complexity bounds for smooth constrained nonlinear optimization using scaled KKT conditions and high-order models
- Calibrating nonconvex penalized regression in ultra-high dimension
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- Cubic regularization of Newton method and its global performance
- Strong oracle optimality of folded concave penalized estimation
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- Worst-Case Complexity of Smoothing Quadratic Regularization Methods for Non-Lipschitzian Optimization
- Lower Bound Theory of Nonzero Entries in Solutions of $\ell_2$-$\ell_p$ Minimization
- A New Sequential Optimality Condition for Constrained Optimization and Algorithmic Consequences
- On the Evaluation Complexity of Composite Function Minimization with Applications to Nonconvex Nonlinear Programming
- Introduction to the Theory of Nonlinear Optimization
- Linearly Constrained Non-Lipschitz Optimization for Image Restoration
- A Cone-Continuity Constraint Qualification and Algorithmic Consequences
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- Computing Optimal Locally Constrained Steps
- Newton’s Method with a Model Trust Region Modification
- Variational Analysis
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Accelerated Methods for NonConvex Optimization
- Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients
- Optimality and Complexity for Constrained Optimization Problems with Nonconvex Regularization
- On High-order Model Regularization for Constrained Optimization
- A second-order sequential optimality condition associated to the convergence of optimization algorithms
- Two New Weak Constraint Qualifications and Applications
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- On the Evaluation Complexity of Constrained Nonlinear Least-Squares and General Constrained Nonlinear Optimization Using Second-Order Methods
- The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
- Nonconcave Penalized Likelihood With NP-Dimensionality
- Mesh Adaptive Direct Search Algorithms for Constrained Optimization
- Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
- Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians
- Penalty Methods for a Class of Non-Lipschitz Optimization Problems
- On sequential optimality conditions for smooth constrained optimization
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers