Discriminative Bayesian filtering lends momentum to the stochastic Newton method for minimizing log-convex functions
DOI: 10.1007/s11590-022-01895-5 · OpenAlex: W4287200304 · MaRDI QID: Q2693789 · FDO: Q2693789
Authors: Michael C. Burkhart
Publication date: 24 March 2023
Published in: Optimization Letters
Full work available at URL: https://arxiv.org/abs/2104.12949
Recommendations
- On the asymptotic rate of convergence of stochastic Newton algorithms and their weighted averaged versions
- Subsampled inexact Newton methods for minimizing large sums of convex functions
- Parallel stochastic Newton method
- Exact and inexact subsampled Newton methods for optimization
- Minimizing finite sums with the stochastic average gradient
Keywords: sequential Bayesian inference · discriminative Bayesian filtering · momentum in optimization · stochastic Newton method
MSC classifications: Convex programming (90C25) · Inference from stochastic processes and prediction (62M20) · Stochastic programming (90C15) · Newton-type methods (49M15)
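The record carries no abstract, but the title names the technique: a stochastic Newton iteration whose update is given momentum via discriminative Bayesian filtering. As an illustration only, here is a minimal sketch of a generic damped, subsampled Newton step with heavy-ball momentum on a synthetic log-convex objective (log-sum-exp). The objective, batch size, damping `lam`, and momentum weight `beta` are all illustrative assumptions; this is not the paper's filter-derived method.

```python
# Minimal sketch (NOT the paper's algorithm): subsampled damped Newton
# with heavy-ball momentum on f(x) = log-sum-exp(A x), a log-convex test
# objective. All hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))  # synthetic data defining the objective

def grad_hess(x, idx):
    """Mini-batch gradient and Hessian of log-sum-exp(A x) on rows idx."""
    B = A[idx]
    z = B @ x
    w = np.exp(z - z.max())
    w /= w.sum()                                  # softmax weights over the batch
    g = B.T @ w                                   # gradient
    H = B.T @ (w[:, None] * B) - np.outer(g, g)   # Hessian (PSD for log-sum-exp)
    return g, H

x = np.zeros(5)
v = np.zeros(5)            # momentum buffer
beta, lam = 0.9, 1e-3      # momentum weight, Levenberg-style damping
for t in range(100):
    idx = rng.choice(len(A), size=32, replace=False)  # subsample the data
    g, H = grad_hess(x, idx)
    step = np.linalg.solve(H + lam * np.eye(5), g)    # damped Newton direction
    v = beta * v + step                               # heavy-ball momentum
    x -= v

print("full-batch gradient norm:", np.linalg.norm(grad_hess(x, np.arange(len(A)))[0]))
```

The damping term `lam * np.eye(5)` keeps the subsampled Hessian invertible; the paper's contribution, per its title and keywords, is instead to obtain the momentum-like smoothing from a discriminative Bayesian filter rather than from a fixed heavy-ball weight.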
Cites Work
- Adaptive subgradient methods for online learning and stochastic optimization
- Title not available
- Elements of Information Theory
- Title not available
- Title not available
- Acceleration of Stochastic Approximation by Averaging
- A Stochastic Approximation Method
- On the use of stochastic Hessian information in optimization methods for machine learning
- Title not available (doi:10.1162/jmlr.2003.3.4-5.993)
- Minimization of functions having Lipschitz continuous first partial derivatives
- Adaptive stochastic approximation by the simultaneous perturbation method
- Incremental Least Squares Methods and the Extended Kalman Filter
- Incremental proximal methods for large scale convex optimization
- A simplified neuron model as a principal component analyzer
- Gaussian filters for nonlinear filtering problems
- Some methods of speeding up the convergence of iteration methods
- Title not available
- On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix
- Large-scale machine learning with stochastic gradient descent
- Monte Carlo techniques to estimate the conditional expectation in multi-stage non-linear filtering
- A survey of truncated-Newton methods
- Stochastic global optimization as a filtering problem
- Bayesian filtering and smoothing
- Hybrid deterministic-stochastic methods for data fitting
- A stochastic line search method with expected complexity analysis
- Optimization methods for large-scale machine learning
- Title not available
- Title not available
- Adaptive sampling strategies for stochastic optimization
- Sub-sampled Newton methods
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- Second-order stochastic optimization for machine learning in linear time
- An investigation of Newton-sketch and subsampled Newton methods
- Exact and inexact subsampled Newton methods for optimization
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- Probabilistic line searches for stochastic optimization
- A Mean-Field Optimal Control Formulation for Global Optimization
- Title not available
- The Discriminative Kalman Filter for Bayesian Filtering with Nonlinear and Nongaussian Observation Models
- Robust Closed-Loop Control of a Cursor in a Person with Tetraplegia using Gaussian Process Regression
Cited In (1)