Variational latent Gaussian process for recovering single-trial dynamics from population spike trains
From MaRDI portal
Publication:5380702
DOI: 10.1162/NECO_A_00953
zbMATH Open: 1414.92130
DBLP: journals/neco/ZhaoP17
arXiv: 1604.03053
Wikidata: Q38883484 (Scholia: Q38883484)
MaRDI QID: Q5380702
FDO: Q5380702
Authors: Yuan Zhao, Il Memming Park
Publication date: 6 June 2019
Published in: Neural Computation
Abstract: When governed by underlying low-dimensional dynamics, the interdependence of a simultaneously recorded population of neurons can be explained by a small number of shared factors, or a low-dimensional trajectory. Recovering these latent trajectories, particularly from single-trial population recordings, may help us understand the dynamics that drive neural computation. However, due to biophysical constraints and noise in spike trains, inferring trajectories from data is in general a challenging statistical problem. Here, we propose a practical and efficient inference method, the variational latent Gaussian process (vLGP). The vLGP combines a generative model that has a history-dependent point-process observation with a smoothness prior on the latent trajectories. The vLGP improves upon earlier methods for recovering latent trajectories, which assume either observation models inappropriate for point processes or linear dynamics. We compare and validate vLGP on both simulated datasets and population recordings from the primary visual cortex. On the V1 dataset, we find that vLGP achieves substantially higher performance than previous methods at predicting omitted spike trains, and that it captures both the toroidal topology of the visual stimulus space and the noise correlations. These results show that vLGP is a robust method with the potential to reveal hidden neural dynamics in large-scale neural recordings.
Full work available at URL: https://arxiv.org/abs/1604.03053
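The generative model described in the abstract — smooth latent trajectories drawn from a Gaussian process, observed through point-process spiking — can be illustrated with a minimal simulation sketch. This is not the authors' vLGP implementation (which also includes spike-history terms and variational inference); all dimensions, the RBF kernel length scale, and the loading matrix below are illustrative assumptions, and Poisson counts per bin stand in for the general point-process observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from the paper)
T, L, N = 200, 2, 30   # time bins, latent dimensions, neurons
dt = 0.01              # bin width in seconds

# Smoothness prior: each latent dimension is a draw from a GP
# with a squared-exponential (RBF) kernel over time.
t = np.arange(T) * dt
tau = 0.1              # kernel length scale (assumed)
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / tau ** 2)
K += 1e-6 * np.eye(T)  # jitter for numerical stability
x = rng.multivariate_normal(np.zeros(T), K, size=L).T   # (T, L) latent trajectory

# Observation model: Poisson spike counts whose log rate is
# linear in the shared latent factors.
C = rng.normal(scale=0.5, size=(L, N))   # loading matrix (assumed scale)
b = np.log(10.0 * dt)                    # baseline log rate, ~10 Hz
rate = np.exp(x @ C + b)                 # (T, N) expected counts per bin
y = rng.poisson(rate)                    # simulated population spike counts
```

Inference in vLGP then runs in the opposite direction: given only the counts `y`, it recovers a variational posterior over the latent trajectory `x` under the GP smoothness prior.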
Recommendations
- Direct discriminative decoder models for analysis of high-dimensional dynamical neural data
- Inference of multiplicative factors underlying neural variability in calcium imaging data
- Autoregressive Point Processes as Latent State-Space Models: A Moment-Closure Approach to Fluctuations and Autocorrelations
- The population tracking model: a simple, scalable statistical model for neural population data
- Discovery of salient low-dimensional dynamical structure in neuronal population activity using Hopfield networks
Mathematics Subject Classification:
- Applications of statistics to biology and medical sciences; meta analysis (62P10)
- Neural biology (92C20)
Cites Work
- Gaussian processes for machine learning.
- 10.1162/153244303768966085
- An introduction to the theory of point processes
- The Variational Gaussian Approximation Revisited
- A new look at state-space models for neural data
- Approximate methods for state-space models
- Title not available
- Extracting low-dimensional latent structure from time series in the presence of delays
Cited In (7)
- Decoding of neural data using cohomological feature extraction
- Direct discriminative decoder models for analysis of high-dimensional dynamical neural data
- Title not available
- Inference of multiplicative factors underlying neural variability in calcium imaging data
- Autoregressive Point Processes as Latent State-Space Models: A Moment-Closure Approach to Fluctuations and Autocorrelations
- Dethroning the Fano factor: a flexible, model-based approach to partitioning neural variability
- Extracting low-dimensional latent structure from time series in the presence of delays