Newton-type methods for non-convex optimization under inexact Hessian information


DOI: 10.1007/s10107-019-01405-z
zbMath: 1451.90134
arXiv: 1708.07164
OpenAlex: W2963307318
Wikidata: Q127827289 (Scholia: Q127827289)
MaRDI QID: Q2205970

Peng Xu, Fred Roosta-Khorasani, Michael W. Mahoney

Publication date: 21 October 2020

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://arxiv.org/abs/1708.07164
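
The paper concerns Newton-type methods (trust-region and adaptive cubic regularization variants) in which the Hessian is only approximately available, e.g. estimated by subsampling in finite-sum problems, subject to a bound of the form ||H_t - ∇²f(x_t)|| ≤ ε. The following is a minimal illustrative sketch of that idea, not the authors' algorithm: the toy nonconvex objective, the sample size, and the simple shifted regularized-Newton step are assumptions chosen for brevity.

```python
# Sketch: subsampled (inexact) Hessian Newton-type iteration on a toy
# finite-sum nonconvex problem. All problem data and parameters here are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

# Component loss f_i(x) = r^2 / (1 + r^2) with r = a_i^T x - b_i,
# a smooth nonconvex (robust) loss.
def grad_i(x, i):
    r = A[i] @ x - b[i]
    return (2 * r / (1 + r**2) ** 2) * A[i]          # f_i'(r) * a_i

def hess_i(x, i):
    r = A[i] @ x - b[i]
    w = (2 - 6 * r**2) / (1 + r**2) ** 3             # f_i''(r)
    return w * np.outer(A[i], A[i])

def full_grad(x):
    return np.mean([grad_i(x, i) for i in range(n)], axis=0)

def subsampled_hessian(x, s):
    # Inexact Hessian: averaging s random component Hessians gives
    # ||H - Hess f(x)|| <= eps with high probability for s large enough.
    idx = rng.choice(n, size=s, replace=False)
    return np.mean([hess_i(x, i) for i in idx], axis=0)

x = np.zeros(d)
sigma = 1.0  # regularization weight, standing in for the TR/cubic parameter
for t in range(50):
    g = full_grad(x)
    H = subsampled_hessian(x, s=100)
    # Regularized Newton step: shift past any negative curvature so that
    # H + shift * I is positive definite, then solve for the step.
    lam_min = np.linalg.eigvalsh(H)[0]
    shift = max(sigma, -lam_min + sigma)
    p = np.linalg.solve(H + shift * np.eye(d), -g)
    x = x + p
    if np.linalg.norm(g) < 1e-6:
        break

print("final gradient norm:", np.linalg.norm(full_grad(x)))
```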

Related Items

Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models
Tensor Bernstein concentration inequalities with an application to sample estimators for high-order moments
Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
Convergence analysis of a subsampled Levenberg-Marquardt algorithm
An overview of stochastic quasi-Newton methods for large-scale machine learning
Inexact restoration with subsampled trust-region methods for finite-sum minimization
Globally Convergent Multilevel Training of Deep Residual Networks
An adaptive cubic regularization algorithm for computing H- and Z-eigenvalues of real even-order supersymmetric tensors
Newton-MR: inexact Newton method with minimum residual sub-problem solver
Faster Riemannian Newton-type optimization by subsampling and cubic regularization
Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
First-Order Methods for Nonconvex Quadratic Minimization
Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
The impact of noise on evaluation complexity: the deterministic trust-region case
Recent Theoretical Advances in Non-Convex Optimization
Global Convergence of Policy Gradient Methods to (Almost) Locally Optimal Policies
Convergence of Newton-MR under Inexact Hessian Information
An algorithm for the minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity
A generalized worst-case complexity analysis for non-monotone line searches
Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
An Inertial Newton Algorithm for Deep Learning
Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
Linesearch Newton-CG methods for convex optimization with noise
Convergence Analysis of Inexact Randomized Iterative Methods

