On the discrepancy principle for stochastic gradient descent
From MaRDI portal
Publication:5123704
DOI: 10.1088/1361-6420/abaa58 · OpenAlex: W3046122974 · MaRDI QID: Q5123704
Publication date: 29 September 2020
Published in: Inverse Problems
Full work available at URL: https://arxiv.org/abs/2004.14625
MSC classification: Nonparametric inference (62Gxx) · General theory of linear operators (47Axx) · Miscellaneous applications of operator theory (47Nxx)
Related Items (5)
- Stochastic asymptotical regularization for linear inverse problems
- Stochastic mirror descent method for linear ill-posed problems in Banach spaces
- Stochastic linear regularization methods: random discrepancy principle and applications
- On the Convergence of Stochastic Gradient Descent for Linear Inverse Problems in Banach Spaces
- An analysis of stochastic variance reduced gradient for linear inverse problems
Uses Software
Cites Work
- Beyond the Bakushinskii veto: regularising linear inverse problems without knowing the noise distribution
- Iterative regularization methods for nonlinear ill-posed problems
- Regularization Tools version 4.0 for Matlab 7.3
- Inverse Problems
- Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Optimization Methods for Large-Scale Machine Learning
- On the regularizing property of stochastic gradient descent
- On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
- An Iteration Formula for Fredholm Integral Equations of the First Kind
- A Stochastic Approximation Method