Discriminative Bayesian filtering lends momentum to the stochastic Newton method for minimizing log-convex functions
MaRDI portal publication 2693789 (item Q2693789)
Recommendations
- On the asymptotic rate of convergence of stochastic Newton algorithms and their weighted averaged versions
- Subsampled inexact Newton methods for minimizing large sums of convex functions
- Parallel stochastic Newton method
- Exact and inexact subsampled Newton methods for optimization
- Minimizing finite sums with the stochastic average gradient
Cites work
- Scientific article (zbMATH DE number 6377992; title not available)
- Scientific article (zbMATH DE number 47310; title not available)
- Scientific article (zbMATH DE number 6860839; title not available)
- Scientific article (zbMATH DE number 3017035; title not available)
- Scientific article (zbMATH DE number 6125590; title not available)
- Scientific article (zbMATH DE number 7306852; title not available)
- Scientific article (zbMATH DE number 3024619; title not available)
- DOI: 10.1162/jmlr.2003.3.4-5.993 (title not available)
- A Mean-Field Optimal Control Formulation for Global Optimization
- A Stochastic Approximation Method
- A simplified neuron model as a principal component analyzer
- A stochastic line search method with expected complexity analysis
- A survey of truncated-Newton methods
- Acceleration of Stochastic Approximation by Averaging
- Adaptive sampling strategies for stochastic optimization
- Adaptive stochastic approximation by the simultaneous perturbation method
- Adaptive subgradient methods for online learning and stochastic optimization
- An investigation of Newton-sketch and subsampled Newton methods
- Bayesian filtering and smoothing
- Elements of Information Theory
- Exact and inexact subsampled Newton methods for optimization
- Gaussian filters for nonlinear filtering problems
- Hybrid deterministic-stochastic methods for data fitting
- Incremental Least Squares Methods and the Extended Kalman Filter
- Incremental proximal methods for large scale convex optimization
- Large-scale machine learning with stochastic gradient descent
- Minimization of functions having Lipschitz continuous first partial derivatives
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
- Monte Carlo techniques to estimate the conditional expectation in multi-stage non-linear filtering
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- On stochastic approximation of the eigenvectors and eigenvalues of the expectation of a random matrix
- On the use of stochastic Hessian information in optimization methods for machine learning
- Optimization methods for large-scale machine learning
- Probabilistic line searches for stochastic optimization
- Robust Closed-Loop Control of a Cursor in a Person with Tetraplegia using Gaussian Process Regression
- Second-order stochastic optimization for machine learning in linear time
- Some methods of speeding up the convergence of iteration methods
- Stochastic global optimization as a filtering problem
- Sub-sampled Newton methods
- The Discriminative Kalman Filter for Bayesian Filtering with Nonlinear and Nongaussian Observation Models
Cited in 1 document