First-order and stochastic optimization methods for machine learning

From MaRDI portal
Publication: 2307433

DOI: 10.1007/978-3-030-39568-1
zbMath: 1442.68003
OpenAlex: W3025638325
MaRDI QID: Q2307433

Guanghui Lan

Publication date: 27 March 2020

Published in: Springer Series in the Data Sciences

Full work available at URL: https://doi.org/10.1007/978-3-030-39568-1




Related Items

Frank–Wolfe Methods with an Unbounded Feasible Region and Applications to Structured Learning
Solving Stochastic Optimization with Expectation Constraints Efficiently by a Stochastic Augmented Lagrangian-Type Algorithm
Spatiotemporal-textual point processes for crime linkage detection
A stochastic primal-dual method for a class of nonconvex constrained optimization
Differentially Private Accelerated Optimization Algorithms
Simple and Optimal Methods for Stochastic Variational Inequalities, II: Markovian Noise and Policy Evaluation in Reinforcement Learning
Simple and Optimal Methods for Stochastic Variational Inequalities, I: Operator Extrapolation
Subgradient ellipsoid method for nonsmooth convex problems
Convergence analysis of a subsampled Levenberg-Marquardt algorithm
A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
Universal Conditional Gradient Sliding for Convex Optimization
Accelerated gradient methods with absolute and relative noise in the gradient
Finite-time convergence rates of distributed local stochastic approximation
A unified analysis of stochastic gradient-free Frank–Wolfe methods
Optimistic optimisation of composite objective with exponentiated update
A distributed proximal gradient method with time-varying delays for solving additive convex optimizations
A unified single-loop alternating gradient projection algorithm for nonconvex-concave and convex-nonconcave minimax problems
Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization
Accelerated variance-reduced methods for saddle-point problems
Optimal Methods for Convex Risk-Averse Distributed Optimization
Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
Randomized Douglas–Rachford Methods for Linear Systems: Improved Accuracy and Efficiency
Variable sample-size operator extrapolation algorithm for stochastic mixed variational inequalities
Optimal Algorithms for Stochastic Complementary Composite Minimization
Decentralized saddle-point problems with different constants of strong convexity and strong concavity
Stochastic regularized Newton methods for nonlinear equations
Block mirror stochastic gradient method for stochastic optimization
Policy mirror descent for reinforcement learning: linear convergence, new sampling complexity, and generalized problem classes
Accelerated doubly stochastic gradient descent for tensor CP decomposition
Hessian averaging in stochastic Newton methods achieves superlinear convergence
Accelerating stochastic sequential quadratic programming for equality constrained optimization using predictive variance reduction
First-order methods for convex optimization
Stochastic variable metric proximal gradient with variance reduction for non-convex composite optimization
Sample average approximations of strongly convex stochastic programs in Hilbert spaces
Statistical Analysis of Fixed Mini-Batch Gradient Descent Estimator
An oracle-based framework for robust combinatorial optimization
Stochastic first-order methods for convex and nonconvex functional constrained optimization
Sample Size Estimates for Risk-Neutral Semilinear PDE-Constrained Optimization
Unnamed Item
Accelerated methods for saddle-point problem
Recent theoretical advances in decentralized distributed convex optimization
Recent Theoretical Advances in Non-Convex Optimization
Frank-Wolfe and friends: a journey into projection-free first-order optimization methods
Conditional Gradient Methods for Convex Optimization with General Affine and Nonlinear Constraints
Solving convex min-min problems with smoothness and strong convexity in one group of variables and low dimension in the other
A new restricted memory level bundle method for constrained convex nonsmooth optimization
Dualize, split, randomize: toward fast nonsmooth optimization algorithms
Stochastic relaxed inertial forward-backward-forward splitting for monotone inclusions in Hilbert spaces
Dimension independent excess risk by stochastic gradient descent
Efficient Algorithms for Distributionally Robust Stochastic Optimization with Discrete Scenario Support
Learning over No-Preferred and Preferred Sequence of Items for Robust Recommendation
Finite-Time Analysis and Restarting Scheme for Linear Two-Time-Scale Stochastic Approximation
Constructing unbiased gradient estimators with finite variance for conditional stochastic optimization
Finite-sample analysis of nonlinear stochastic approximation with applications in reinforcement learning
Decentralized convex optimization under affine constraints for power systems control
Network manipulation algorithm based on inexact alternating minimization
Distributionally robust optimization with moment ambiguity sets
Inertial accelerated SGD algorithms for solving large-scale lower-rank tensor CP decomposition problems
Complexity of stochastic dual dynamic programming




This page was built for publication: First-order and stochastic optimization methods for machine learning