A Stochastic Quasi-Newton Method for Large-Scale Optimization

From MaRDI portal
Publication:121136

DOI: 10.1137/140954362 · zbMath: 1382.65166 · arXiv: 1401.7020 · OpenAlex: W2963941964 · MaRDI QID: Q121136

J. Nocedal, S. L. Hansen, Y. Singer, R. H. Byrd

Publication date: 27 January 2014

Published in: SIAM Journal on Optimization

Full work available at URL: https://arxiv.org/abs/1401.7020
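The method this entry refers to combines SGD-style iterations with L-BFGS curvature information gathered from subsampled Hessian-vector products at averaged iterates. The following is a minimal NumPy sketch of that general scheme, not the authors' implementation: the function names (`sqn`, `grad_batch`, `hess_vec_batch`), the batch sizes, the 1/sqrt(k) step size, and all default parameters are illustrative assumptions.

```python
import numpy as np

def sqn(grad_batch, hess_vec_batch, x0, n, steps=2000, b=32, bH=64,
        L=10, M=10, alpha=0.05, seed=0):
    """Sketch of a stochastic quasi-Newton loop: SGD steps scaled by an
    L-BFGS inverse-Hessian estimate whose curvature pairs (s, y) come from
    subsampled Hessian-vector products, computed every L iterations at
    averages of the recent iterates."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    pairs = []                      # stored (s, y) curvature pairs, at most M
    xbar, xbar_prev = np.zeros_like(x), None
    for k in range(1, steps + 1):
        idx = rng.choice(n, size=b, replace=False)
        g = grad_batch(x, idx)
        # L-BFGS two-loop recursion: apply the inverse-Hessian estimate to g
        q = g.copy()
        alphas = []
        for s, y in reversed(pairs):            # newest pair first
            a = (s @ q) / (y @ s)
            alphas.append(a)
            q -= a * y
        if pairs:                               # initial scaling gamma = s'y / y'y
            s, y = pairs[-1]
            q *= (s @ y) / (y @ y)
        for (s, y), a in zip(pairs, reversed(alphas)):   # oldest pair first
            q += (a - (y @ q) / (y @ s)) * s
        x -= alpha / np.sqrt(k) * q             # diminishing step size
        xbar += x
        if k % L == 0:                          # refresh curvature every L steps
            xbar /= L
            if xbar_prev is not None:
                s = xbar - xbar_prev
                idxH = rng.choice(n, size=bH, replace=False)
                y = hess_vec_batch(xbar, s, idxH)   # subsampled Hessian-vector product
                if s @ y > 1e-10:               # keep only safely positive curvature
                    pairs.append((s, y))
                    pairs = pairs[-M:]
            xbar_prev, xbar = xbar, np.zeros_like(x)
    return x
```

A usage sketch on a least-squares loss F(x) = (1/2n)||Ax - b||^2 would pass `grad_batch = lambda x, idx: A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)` and the analogous Hessian-vector product; the design point is that curvature is estimated from a separate, typically larger, Hessian subsample rather than by differencing noisy batch gradients.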




Related Items (80)

- A decoupling approach for time-dependent robust optimization with application to power semiconductor devices
- Managing randomization in the multi-block alternating direction method of multipliers for quadratic optimization
- Bayesian sparse learning with preconditioned stochastic gradient MCMC and its applications
- An adaptive Hessian approximated stochastic gradient MCMC method
- A fully stochastic second-order trust region method
- Quasi-Newton methods for machine learning: forget the past, just sample
- Limited-memory BFGS with displacement aggregation
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- A Stochastic Second-Order Generalized Estimating Equations Approach for Estimating Association Parameters
- QNG: A Quasi-Natural Gradient Method for Large-Scale Statistical Learning
- A new robust class of skew elliptical distributions
- Sketch-based empirical natural gradient methods for deep learning
- A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
- A New Likelihood Ratio Method for Training Artificial Neural Networks
- Two-dimensional distribution of streamwise velocity in open channel flow using maximum entropy principle: incorporation of additional constraints based on conservation laws
- slimTrain---A Stochastic Approximation Method for Training Separable Deep Neural Networks
- Towards explicit superlinear convergence rate for SR1
- A framework for parallel second order incremental optimization algorithms for solving partially separable problems
- Efficient learning rate adaptation based on hierarchical optimization approach
- Adaptive stochastic approximation algorithm
- A stochastic variance reduced gradient using Barzilai-Borwein techniques as second order information
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- Two-stage 2D-to-3d reconstruction of realistic microstructures: implementation and numerical validation by effective properties
- Inexact restoration with subsampled trust-region methods for finite-sum minimization
- On Stochastic and Deterministic Quasi-Newton Methods for Nonstrongly Convex Optimization: Asymptotic Convergence and Rate Analysis
- Adaptive step size rules for stochastic optimization in large-scale learning
- A modified stochastic quasi-Newton algorithm for summing functions problem in machine learning
- On the complexity of a stochastic Levenberg-Marquardt method
- The regularized stochastic Nesterov's accelerated quasi-Newton method with applications
- On pseudoinverse-free block maximum residual nonlinear Kaczmarz method for solving large-scale nonlinear system of equations
- Trust-region algorithms for training responses: machine learning methods using indefinite Hessian approximations
- Riemannian Natural Gradient Methods
- A single timescale stochastic quasi-Newton method for stochastic optimization
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- Secant penalized BFGS: a noise robust quasi-Newton method via penalizing the secant condition
- On the asymptotic rate of convergence of stochastic Newton algorithms and their weighted averaged versions
- Use of projective coordinate descent in the Fekete problem
- Convergence of Inexact Forward--Backward Algorithms Using the Forward--Backward Envelope
- Spectral projected gradient method for stochastic optimization
- Block BFGS Methods
- Unnamed Item
- Quasi-Newton methods: superlinear convergence without line searches for self-concordant functions
- Quasi-Newton smoothed functional algorithms for unconstrained and constrained simulation optimization
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Global convergence of a modified two-parameter scaled BFGS method with Yuan-Wei-Lu line search for unconstrained optimization
- A Fast Algorithm for Maximum Likelihood Estimation of Mixture Proportions Using Sequential Quadratic Programming
- Unnamed Item
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
- An information based approach to stochastic control problems
- Unnamed Item
- A variation of Broyden class methods using Householder adaptive transforms
- Efficient computation of derivatives for solving optimization problems in R and Python using SWIG-generated interfaces to ADOL-C
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- stochQN
- Stochastic sub-sampled Newton method with variance reduction
- Evolutionary prediction of nonstationary event popularity dynamics of Weibo social network using time-series characteristics
- Analysis of the BFGS Method with Errors
- An Efficient Stochastic Newton Algorithm for Parameter Estimation in Logistic Regressions
- Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization
- A robust multi-batch L-BFGS method for machine learning
- Sampled Tikhonov regularization for large linear inverse problems
- Going Off the Grid: Iterative Model Selection for Biclustered Matrix Completion
- An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration
- An Inertial Newton Algorithm for Deep Learning
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Generalized self-concordant functions: a recipe for Newton-type methods
- Sampled limited memory methods for massive linear inverse problems
- Kalman-Based Stochastic Gradient Method with Stop Condition and Insensitivity to Conditioning
- SABRINA: a stochastic subspace majorization-minimization algorithm
- Unnamed Item
- Unnamed Item
- Unnamed Item
- A subsampling approach for Bayesian model selection
- Unnamed Item
- A globally convergent incremental Newton method
- A Noise-Tolerant Quasi-Newton Algorithm for Unconstrained Optimization
- A hybrid stochastic optimization framework for composite nonconvex optimization
- LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums
- Newton-like Method with Diagonal Correction for Distributed Optimization








This page was built for publication: A Stochastic Quasi-Newton Method for Large-Scale Optimization