A sub-sampled tensor method for nonconvex optimization
Publication:6190818
Abstract: We present a stochastic optimization method that uses a fourth-order regularized model to find local minima of smooth and potentially non-convex objective functions with a finite-sum structure. The algorithm uses sub-sampled derivatives instead of exact quantities. The proposed approach is shown to find an \((\epsilon_1,\epsilon_2,\epsilon_3)\)-third-order critical point in at most \(\mathcal{O}\!\left(\max\left(\epsilon_1^{-4/3},\,\epsilon_2^{-2},\,\epsilon_3^{-4}\right)\right)\) iterations, thereby matching the rate of deterministic approaches. In order to prove this result, we derive a novel tensor concentration inequality for sums of tensors of any order that makes explicit use of the finite-sum structure of the objective function.
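For context, a minimal sketch of the setting the abstract describes (the notation \(f_i\), \(g\), \(B\), \(T\), \(\sigma\) is assumed here, not taken from this page): the objective has finite-sum structure, and each iteration approximately minimizes a fourth-order regularized third-order Taylor-like model in which the exact derivatives are replaced by sub-sampled estimates.

\[
f(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} f_i(x),
\qquad
m(x,s) \;=\; f(x) + g^{\top} s + \tfrac{1}{2}\, s^{\top} B\, s
 + \tfrac{1}{6}\, T[s,s,s] + \tfrac{\sigma}{4}\,\|s\|^{4},
\]

where \(g \approx \nabla f(x)\), \(B \approx \nabla^2 f(x)\) and \(T \approx \nabla^3 f(x)\) are computed on random sub-samples of the \(n\) component functions rather than on the full sum; the tensor concentration inequality mentioned in the abstract controls how large those sub-samples must be for the estimates to be sufficiently accurate.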
Recommendations
- Implementable tensor methods in unconstrained convex optimization
- Accelerated methods for nonconvex optimization
- Newton-type methods for non-convex optimization under inexact Hessian information
- An optimal high-order tensor method for convex optimization
- Introduction to high-order optimization methods