A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
Publication: 5076721
DOI: 10.1287/moor.2021.1147 · zbMath: 1492.90104 · arXiv: 1804.05368 · OpenAlex: W2797791799 · MaRDI QID: Q5076721
Farzad Yousefian, Afrooz Jalilzadeh, Angelia Nedić, Uday V. Shanbhag
Publication date: 17 May 2022
Published in: Mathematics of Operations Research
Full work available at URL: https://arxiv.org/abs/1804.05368
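The paper develops a stochastic quasi-Newton scheme in which the sample size used to estimate gradients grows across iterations, covering both smooth and (via smoothing) nonsmooth stochastic convex problems. As a rough illustration of the variable sample-size idea only, and not the authors' algorithm, the following minimal Python sketch runs a limited-memory BFGS step with a geometrically growing mini-batch on a synthetic regularized least-squares problem; the growth factor, step size, memory length, and all variable names are hypothetical choices for the sketch.

```python
# Illustrative sketch only: stochastic L-BFGS with a geometrically increasing
# mini-batch ("variable sample size") on regularized least squares.
# Not the paper's method; all parameters below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)
mu = 1e-2  # strong-convexity (regularization) parameter

def sampled_grad(x, batch):
    """Mini-batch gradient of 0.5*||Ax-b||^2/n + 0.5*mu*||x||^2."""
    Ab, bb = A[batch], b[batch]
    return Ab.T @ (Ab @ x - bb) / len(batch) + mu * x

def two_loop(g, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns H_k @ g."""
    q, alphas = g.copy(), []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if y_list:  # scale by gamma_k = s^T y / y^T y for the newest pair
        s, y = s_list[-1], y_list[-1]
        q *= (y @ s) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        q += (a - (y @ q) / (y @ s)) * s
    return q

x = np.zeros(d)
s_list, y_list, memory = [], [], 5
N_k, growth, step = 8, 1.5, 0.5  # hypothetical sample-size schedule
for k in range(40):
    batch = rng.choice(n, size=min(int(N_k), n), replace=False)
    g = sampled_grad(x, batch)
    x_new = x - step * two_loop(g, s_list, y_list)
    # Curvature pair built from the SAME sample, a common stochastic-QN device.
    y = sampled_grad(x_new, batch) - g
    s = x_new - x
    if s @ y > 1e-10:  # keep the pair only if curvature is positive
        s_list.append(s); y_list.append(y)
        if len(s_list) > memory:
            s_list.pop(0); y_list.pop(0)
    x, N_k = x_new, N_k * growth  # grow the sample size geometrically
print("final error:", np.linalg.norm(x - x_true))
```

Growing the batch geometrically is one way such schemes trade per-iteration cost for progressively lower gradient noise; the schedule above is purely for illustration.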
Related Items
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- An inexact restoration-nonsmooth algorithm with variable accuracy for stochastic nonsmooth convex optimization problems in machine learning and stochastic linear complementarity problems
Cites Work
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Sample size selection in optimization methods for machine learning
- On the limited memory BFGS method for large scale optimization
- Variable-number sample-path optimization
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- On smoothing, regularization, and averaging in stochastic approximation methods for stochastic variational inequality problems
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Globally convergent variable metric method for convex nonsmooth unconstrained minimization
- Gradient trust region algorithm with limited memory BFGS update for nonsmooth convex minimization
- The Sample Average Approximation Method for Stochastic Discrete Optimization
- Global Convergence of Online Limited Memory BFGS
- Strongly Convex Functions, Moreau Envelopes, and the Generic Nature of Convex Functions with Strong Minimizers
- A Quasi-Newton Approach to Nonsmooth Convex Optimization Problems in Machine Learning
- Hybrid Deterministic-Stochastic Methods for Data Fitting
- Smoothing and First Order Methods: A Unified Framework
- On Choosing Parameters in Retrospective-Approximation Algorithms for Stochastic Root Finding and Simulation Optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- ASTRO-DF: A Class of Adaptive Sampling Trust-Region Algorithms for Derivative-Free Stochastic Optimization
- Adaptive Sampling Strategies for Stochastic Optimization
- Variable-sample methods for stochastic optimization
- RES: Regularized Stochastic BFGS Algorithm
- First-Order Methods in Optimization
- On Sampling Rates in Simulation-Based Recursions
- Variance-Based Extragradient Methods with Line Search for Stochastic Variational Inequalities
- On Stochastic and Deterministic Quasi-Newton Methods for Nonstrongly Convex Optimization: Asymptotic Convergence and Rate Analysis
- A Stochastic Line Search Method with Expected Complexity Analysis
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- New limited memory bundle method for large-scale nonsmooth optimization
- Proximité et dualité dans un espace hilbertien [Proximity and duality in a Hilbert space]
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- A Stochastic Approximation Method