Practical inexact proximal quasi-Newton method with global complexity analysis


DOI: 10.1007/s10107-016-0997-3
zbMath: 1366.90166
arXiv: 1311.6547
OpenAlex: W2962748029
MaRDI QID: Q344963

Katya Scheinberg, Xiaocheng Tang

Publication date: 25 November 2016

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://arxiv.org/abs/1311.6547




Related Items (31)

An inexact successive quadratic approximation method for a class of difference-of-convex optimization problems
Forward-backward quasi-Newton methods for nonsmooth optimization problems
Performance of first- and second-order methods for \(\ell_1\)-regularized least squares problems
Second order semi-smooth proximal Newton methods in Hilbert spaces
A flexible coordinate descent method
A Reduced-Space Algorithm for Minimizing \(\ell_1\)-Regularized Convex Functions
COAP 2021 best paper prize
Inexact successive quadratic approximation for regularized optimization
Global convergence of a family of modified BFGS methods under a modified weak-Wolfe-Powell line search for nonconvex functions
Inexact proximal stochastic gradient method for convex composite optimization
Inexact proximal DC Newton-type method for nonconvex composite functions
A globally convergent proximal Newton-type method in nonsmooth convex optimization
Accelerating inexact successive quadratic approximation for regularized optimization through manifold identification
Minimizing oracle-structured composite functions
Inexact proximal Newton methods in Hilbert spaces
Global complexity analysis of inexact successive quadratic approximation methods for regularized optimization under mild assumptions
A proximal quasi-Newton method based on memoryless modified symmetric rank-one formula
A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo-Tseng error bound property
FaRSA for \(\ell_1\)-regularized convex optimization: local convergence and numerical experience
Optimization Methods for Large-Scale Machine Learning
Inexact variable metric stochastic block-coordinate descent for regularized optimization
Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates
Inexact proximal Newton methods for self-concordant functions
Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
Inexact proximal memoryless quasi-Newton methods based on the Broyden family for minimizing composite functions
Globalized inexact proximal Newton-type methods for nonconvex composite functions
An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration
Fused Multiple Graphical Lasso
One-Step Estimation with Scaled Proximal Methods


Uses Software


Cites Work


This page was built for publication: Practical inexact proximal quasi-Newton method with global complexity analysis