Probabilistic Line Searches for Stochastic Optimization
Publication: 4637044
zbMath: 1441.90110
arXiv: 1502.02846
MaRDI QID: Q4637044
Maren Mahsereci, Philipp Hennig
Publication date: 17 April 2018
Full work available at URL: https://arxiv.org/abs/1502.02846
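For context on the indexed paper: it models the one-dimensional objective along the search direction with a Gaussian process fitted to noisy function and gradient observations, and accepts a step size once a probabilistic version of the Wolfe conditions holds with sufficient probability. Below is a minimal sketch of that acceptance test only, not the authors' implementation: it assumes the two Wolfe residuals a(t) = f(0) + c1·t·f'(0) − f(t) (sufficient decrease) and b(t) = f'(t) − c2·f'(0) (curvature) already have a known jointly Gaussian posterior; the posterior moments and the 0.3 threshold are illustrative placeholder values.

```python
# Minimal sketch of a probabilistic Wolfe acceptance test, assuming the
# joint Gaussian posterior over the two Wolfe residuals is already known.
import numpy as np
from scipy.stats import multivariate_normal

def p_wolfe(mean_ab, cov_ab):
    """P(a > 0 and b > 0) for jointly Gaussian Wolfe residuals (a, b).

    a > 0 encodes sufficient decrease, b > 0 the curvature condition.
    P(a > 0, b > 0) equals the CDF of (-a, -b) evaluated at the origin.
    """
    return multivariate_normal(mean=-np.asarray(mean_ab), cov=cov_ab).cdf([0.0, 0.0])

# Placeholder posterior moments at one candidate step size (illustrative only):
p = p_wolfe(mean_ab=[0.4, 0.1], cov_ab=[[0.09, 0.02], [0.02, 0.04]])
accept = p > 0.3  # acceptance threshold; the paper suggests a value around 0.3
print(f"p_Wolfe = {p:.3f}, accept = {accept}")
```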
Related Items (15)
- Accelerating mini-batch SARAH by step size rules
- An empirical study into finding optima in stochastic optimization of neural networks
- Stochastic quasi-Newton with line-search regularisation
- Improved variance reduction extragradient method with line search for stochastic variational inequalities
- An overview of stochastic quasi-Newton methods for large-scale machine learning
- Adaptive step size rules for stochastic optimization in large-scale learning
- Discriminative Bayesian filtering lends momentum to the stochastic Newton method for minimizing log-convex functions
- Variance-Based Extragradient Methods with Line Search for Stochastic Variational Inequalities
- Backtracking gradient descent method and some applications in large scale optimisation. II: Algorithms and experiments
- Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches
- Data-driven algorithm selection and tuning in optimization and signal processing
- A Stochastic Line Search Method with Expected Complexity Analysis
- A modern retrospective on probabilistic numerics
- Probabilistic Line Searches for Stochastic Optimization
- Unnamed Item
Cites Work
- Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming
- Efficient global optimization of expensive black-box functions
- Minimization of functions having Lipschitz continuous first partial derivatives
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Numerical Optimization
- Probabilistic Line Searches for Stochastic Optimization
- Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting
- Function minimization by conjugate gradients
- A New Method of Solving Nonlinear Simultaneous Equations
- Convergence Conditions for Ascent Methods
- A Family of Variable-Metric Methods Derived by Variational Means
- A new approach to variable metric algorithms
- Conditioning of Quasi-Newton Methods for Function Minimization
- A Stochastic Approximation Method