A kernel-based calculation of information on a metric space
From MaRDI portal
Publication: Q280658
DOI: 10.3390/E15104540
zbMATH Open: 1398.94078
arXiv: 1405.4572
OpenAlex: W3101035600
MaRDI QID: Q280658
Authors: R. Joshua Tobin, Conor Houghton
Publication date: 10 May 2016
Published in: Entropy
Abstract: Kernel density estimation is a technique for approximating probability distributions. Here, it is applied to the calculation of mutual information on a metric space. This is motivated by the problem in neuroscience of calculating the mutual information between stimuli and spiking responses; the space of these responses is a metric space. It is shown that kernel density estimation on a metric space resembles the k-nearest-neighbor approach. This approach is applied to a toy dataset designed to mimic electrophysiological data.
Full work available at URL: https://arxiv.org/abs/1405.4572
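To illustrate the idea in the abstract, here is a minimal sketch (not the paper's exact estimator) of a plug-in mutual-information estimate that uses a box kernel of fixed radius on the response metric space; counting neighbours within the kernel radius is what makes the method resemble a k-nearest-neighbour approach. The function name `box_kernel_mi`, the toy data, and the bandwidth `h` are all illustrative assumptions.

```python
import math
import random

def box_kernel_mi(stimuli, responses, metric, h):
    """Crude plug-in MI estimate (in bits): for each sample, compare a
    box-kernel density estimate of the response conditioned on its stimulus
    against the marginal density estimate; the kernel normalisation cancels
    in the ratio. Illustrative sketch only, with self-counts included."""
    n = len(responses)
    total = 0.0
    for i in range(n):
        # neighbours of response i within radius h, over all responses
        near_all = sum(1 for j in range(n)
                       if metric(responses[i], responses[j]) <= h)
        # neighbours within radius h, restricted to the same stimulus class
        same = [j for j in range(n) if stimuli[j] == stimuli[i]]
        near_same = sum(1 for j in same
                        if metric(responses[i], responses[j]) <= h)
        # log ratio of conditional to marginal density estimates
        total += math.log2((near_same / len(same)) / (near_all / n))
    return total / n

random.seed(0)
# toy data loosely mimicking the paper's setup: a binary stimulus and a
# noisy scalar "response", with absolute difference as the metric
stim = [random.randint(0, 1) for _ in range(400)]
resp = [s + random.gauss(0.0, 0.2) for s in stim]
mi = box_kernel_mi(stim, resp, metric=lambda a, b: abs(a - b), h=0.3)
```

With well-separated classes the estimate approaches the 1 bit carried by the binary stimulus; shrinking the noise or tuning `h` trades bias against variance, which is the usual bandwidth-selection issue in kernel density estimation.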
Cites Work
- Title not available
- Remarks on Some Nonparametric Estimates of a Density Function
- On Estimation of a Probability Density Function and Mode
- A novel spike distance
- Estimation of the information by an adaptive partitioning of the observation space
- Large sample optimality of least squares cross-validation in density estimation
- Quantifying Neurotransmission Reliability Through Metrics-Based Information Analysis
- A New Multineuron Spike Train Metric
- Title not available
- Estimation of Entropy and Mutual Information
- Tight Data-Robust Bounds to Mutual Information Combining Shuffling and Model Selection Techniques
Cited In (4)