Hybrid Deterministic-Stochastic Methods for Data Fitting
Publication: 2909270
DOI: 10.1137/110830629
zbMath: 1262.90090
arXiv: 1104.2373
OpenAlex: W3103657382
MaRDI QID: Q2909270
Michael P. Friedlander, Mark Schmidt
Publication date: 30 August 2012
Published in: SIAM Journal on Scientific Computing
Full work available at URL: https://arxiv.org/abs/1104.2373
Mathematics Subject Classification (MSC):
Convex programming (90C25)
Large-scale problems in mathematical programming (90C06)
Applications of operator theory in optimization, convex analysis, mathematical programming, economics (47N10)
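The paper concerns methods that interpolate between stochastic and deterministic gradient descent for finite-sum data-fitting problems by letting the sample size used in the gradient estimate grow across iterations. A minimal sketch of that growing-batch idea, assuming a least-squares objective; the function name, batch schedule, and step size below are illustrative choices, not the paper's actual algorithm or parameters:

import numpy as np

def growing_batch_descent(A, b, x0, batch0=8, growth=1.1,
                          step=1e-3, iters=200, rng=None):
    # Illustrative sketch: minimize f(x) = (1/2n)||Ax - b||^2 with a
    # subsampled gradient whose batch grows each iteration, moving
    # from cheap stochastic steps toward the full (deterministic)
    # gradient. All names and defaults here are assumptions.
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    x = x0.copy()
    batch = float(batch0)
    for _ in range(iters):
        m = min(n, int(round(batch)))
        idx = rng.choice(n, size=m, replace=False)
        # Subsampled least-squares gradient on the current batch.
        g = A[idx].T @ (A[idx] @ x - b[idx]) / m
        x -= step * g
        batch *= growth  # enlarge the sample so gradient error shrinks
    return x

Early iterations touch only a few rows of the data, while later iterations approach full gradient descent; the point of such schemes is that growing the sample fast enough can retain the convergence behavior of the deterministic method at a fraction of its cost.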
Related Items
A Levenberg-Marquardt method for large nonlinear least-squares problems with dynamic accuracy in functions and gradients ⋮ A fully stochastic second-order trust region method ⋮ A frequency-domain analysis of inexact gradient methods ⋮ Ritz-like values in steplength selections for stochastic gradient methods ⋮ Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling ⋮ Adaptive Sampling Strategies for Stochastic Optimization ⋮ A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization ⋮ Seismic iterative migration velocity analysis: two strategies to update the velocity model ⋮ Robust optimal control for a batch nonlinear enzyme-catalytic switched time-delayed process with noisy output measurements ⋮ Subsampled nonmonotone spectral gradient methods ⋮ Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization ⋮ Inexact first-order primal-dual algorithms ⋮ An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians ⋮ Semi-discrete optimal transport: hardness, regularization and numerical solution ⋮ Convergence analysis of a subsampled Levenberg-Marquardt algorithm ⋮ Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming ⋮ A trust region method for noisy unconstrained optimization ⋮ An adaptive sampling augmented Lagrangian method for stochastic optimization with deterministic constraints ⋮ A framework of convergence analysis of mini-batch stochastic projected gradient methods ⋮ Inexact restoration with subsampled trust-region methods for finite-sum minimization ⋮ On Synchronous, Asynchronous, and Randomized Best-Response Schemes for Stochastic Nash Games ⋮ Block mirror stochastic gradient method for stochastic optimization ⋮ Projected Nonlinear Least Squares for Exponential Fitting ⋮ Hessian averaging in stochastic Newton methods achieves superlinear convergence ⋮ On Sampling Rates in Simulation-Based Recursions ⋮ Discriminative Bayesian filtering lends momentum to the stochastic Newton method for minimizing log-convex functions ⋮ Automatic alignment for three-dimensional tomographic reconstruction ⋮ On variance reduction for stochastic smooth convex optimization with multiplicative noise ⋮ Sub-sampled Newton methods ⋮ Spectral projected gradient method for stochastic optimization ⋮ Optimization Methods for Large-Scale Machine Learning ⋮ Variable sample size method for equality constrained optimization problems ⋮ Minimizing finite sums with the stochastic average gradient ⋮ Robust inversion, dimensionality reduction, and randomized sampling ⋮ Total Variation Regularization Strategies in Full-Waveform Inversion ⋮ Inexact Restoration approach for minimization with inexact evaluation of the objective function ⋮ Modeling and parameter identification for a nonlinear multi-stage system for dha regulon in batch culture ⋮ Penalty variable sample size method for solving optimization problems with equality constraints in a form of mathematical expectation ⋮ Generalized row-action methods for tomographic imaging ⋮ A Stochastic Line Search Method with Expected Complexity Analysis ⋮ A robust multi-batch L-BFGS method for machine learning ⋮ An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration ⋮ A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization ⋮ Stochastic relaxed inertial forward-backward-forward splitting for monotone inclusions in Hilbert spaces ⋮ Convergence Analysis of Inexact Randomized Iterative Methods ⋮ Nonmonotone line search methods with variable sample size ⋮ Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization ⋮ Newton-like Method with Diagonal Correction for Distributed Optimization