Variational latent Gaussian process for recovering single-trial dynamics from population spike trains
From MaRDI portal
Publication:5380702
Abstract: When governed by underlying low-dimensional dynamics, the interdependence of a simultaneously recorded population of neurons can be explained by a small number of shared factors, or a low-dimensional trajectory. Recovering these latent trajectories, particularly from single-trial population recordings, may help us understand the dynamics that drive neural computation. However, due to biophysical constraints and noise in the spike trains, inferring trajectories from data is in general a challenging statistical problem. Here, we propose a practical and efficient inference method, called the variational latent Gaussian process (vLGP). The vLGP combines a generative model having a history-dependent point-process observation with a smoothness prior on the latent trajectories. The vLGP improves upon earlier methods for recovering latent trajectories, which assume either observation models inappropriate for point processes or linear dynamics. We compare and validate vLGP on both simulated datasets and population recordings from the primary visual cortex. On the V1 dataset, we find that vLGP achieves substantially higher performance than previous methods in predicting omitted spike trains, as well as in capturing both the toroidal topology of the visual stimulus space and the noise correlations. These results show that vLGP is a robust method with the potential to reveal hidden neural dynamics from large-scale neural recordings.
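The generative model the abstract describes (smooth latent trajectories under a Gaussian-process prior driving point-process spike observations) can be sketched as follows. This is a minimal illustrative simulation, not the authors' implementation: all parameter names, the RBF kernel choice, the linear-plus-log-link readout, and the Poisson approximation of the point process in small time bins are assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

T, L, N = 200, 2, 10   # time bins, latent dimensions, neurons (illustrative sizes)
dt = 0.01              # bin width in seconds
t = np.arange(T) * dt

# Squared-exponential (RBF) kernel: one way to encode the smoothness prior
# on the latent trajectories (kernel choice is an assumption here).
def rbf_kernel(t, lengthscale=0.1, variance=1.0):
    d = t[:, None] - t[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

K = rbf_kernel(t) + 1e-6 * np.eye(T)  # jitter for numerical stability
latents = rng.multivariate_normal(np.zeros(T), K, size=L).T  # shape (T, L)

# Linear readout with a log link maps the shared latent factors
# to each neuron's firing rate.
C = rng.normal(scale=0.5, size=(L, N))  # loading matrix (hypothetical)
b = np.full(N, np.log(20.0))            # baseline log-rate, ~20 Hz
rates = np.exp(latents @ C + b)         # shape (T, N), in Hz

# Point-process observations: Poisson spike counts in small bins.
spikes = rng.poisson(rates * dt)
print(spikes.shape)  # (200, 10)
```

Inference in vLGP runs this model in reverse: given only `spikes`, it recovers an approximate posterior over `latents`; the history-dependent observation term of the actual model is omitted from this sketch.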
Recommendations
- Direct discriminative decoder models for analysis of high-dimensional dynamical neural data
- Inference of multiplicative factors underlying neural variability in calcium imaging data
- Autoregressive Point Processes as Latent State-Space Models: A Moment-Closure Approach to Fluctuations and Autocorrelations
- The population tracking model: a simple, scalable statistical model for neural population data
- Discovery of salient low-dimensional dynamical structure in neuronal population activity using Hopfield networks
Cites work
- scientific article; zbMATH DE number 7274999 (no title available)
- 10.1162/153244303768966085
- A new look at state-space models for neural data
- An introduction to the theory of point processes
- Approximate methods for state-space models
- Extracting low-dimensional latent structure from time series in the presence of delays
- Gaussian processes for machine learning
- The Variational Gaussian Approximation Revisited
Cited in (7)
- scientific article; zbMATH DE number 1843057 (no title available)
- Autoregressive Point Processes as Latent State-Space Models: A Moment-Closure Approach to Fluctuations and Autocorrelations
- Extracting low-dimensional latent structure from time series in the presence of delays
- Decoding of neural data using cohomological feature extraction
- Inference of multiplicative factors underlying neural variability in calcium imaging data
- Dethroning the Fano factor: a flexible, model-based approach to partitioning neural variability
- Direct discriminative decoder models for analysis of high-dimensional dynamical neural data