An Optimal Convergence Rate for the Gaussian Regularized Shannon Sampling Series
Publication: 4634133
DOI: 10.1080/01630563.2018.1549072
zbMath: 1421.94024
arXiv: 1711.04909
OpenAlex: W2962754062
MaRDI QID: Q4634133
Publication date: 7 May 2019
Published in: Numerical Functional Analysis and Optimization
Full work available at URL: https://arxiv.org/abs/1711.04909
Related Items (1)
Cites Work
- Optimal learning of bandlimited functions from localized sampling
- Non-uniform weighted average sampling and reconstruction in shift-invariant and wavelet spaces
- Discrete singular convolution for the sine-Gordon equation
- On the validity of "A proof that the discrete singular convolution (DSC)/Lagrange-distributed approximation function (LDAF) method is inferior to high order finite differences" by J. P. Boyd
- Convergence Analysis of the Gaussian Regularized Shannon Sampling Series
- An Average Sampling Theorem for Bandlimited Stochastic Processes
- The Shannon sampling theorem—Its various extensions and applications: A tutorial review
- Reconstruction Algorithms in Irregular Sampling
- Comparison of the Discrete Singular Convolution and Three Other Numerical Schemes for Solving Fisher's Equation
- Generalizations of the sampling theorem: Seven decades after Nyquist
- Sampling-50 years after Shannon
- Reconstruction of band-limited signals from local averages
- On the regularized Whittaker-Kotel’nikov-Shannon sampling formula
- Exponential approximation of bandlimited random processes from oversampling
- Bounds for Truncation Error of the Sampling Expansion