Keypoint-MoSeq: parsing behavior by linking point tracking to pose dynamics

From MaRDI portal

DOI: 10.5281/zenodo.10636983
Zenodo: 10636983
MaRDI QID: Q6725234
FDO: Q6725234

Dataset published in the Zenodo repository.

MacKenzie Weygandt Mathis, Mohammed Abdal Monium Osman, Sofia Makowska, Winthrop F. Gillis, Libby Zhang, Caleb Weinreb, Scott W. Linderman, Eli Conlin, Sherry Lin, Shaokai Ye, Jonah E. Pearl, Maya Jay, Sidharth Annapragada, Red Hoffman, Alexander Mathis, Talmo D Pereira, Sandeep Robert Datta

Publication date: 8 February 2024

Copyright license: Creative Commons Attribution 4.0 International



Raw data for the paper "Keypoint-MoSeq: parsing behavior by linking point tracking to pose dynamics".

- open_field_2D.zip: 2D keypoints from open field recordings, used in Fig 1, Fig 2, and Fig 3a-g. The data is formatted as if it were the output of DeepLabCut so that it can be used with the keypoint-MoSeq tutorial.
- open_field_3D.h5: 3D keypoints from open field recordings, used in Fig 5g-l. The data is formatted as an h5 file with one dataset per recording; each dataset is an array of shape (n_frames, n_keypoints, 3).
- accelerometry_and_keypoints.h5: 2D keypoints and inertial measurement unit (IMU) readings, used in Fig 3h-i. The keypoints and IMU data can be aligned using their respective timestamps.
- dopamine_and_keypoints.h5: 2D keypoints and striatal dopamine signals (measured using dLight), used in Fig 4. The dopamine signal is already synced to the keypoints.
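As a minimal sketch of working with the described h5 layout (one dataset per recording, each an array of shape (n_frames, n_keypoints, 3)), the snippet below first writes a small synthetic file mimicking that structure and then reads every recording back with h5py. The recording names, keypoint count, and frame counts here are hypothetical placeholders, not taken from the actual open_field_3D.h5 file.

```python
import h5py
import numpy as np

demo_path = "open_field_3D_demo.h5"

# Write a synthetic file with the documented layout: one dataset per
# recording, each an array of shape (n_frames, n_keypoints, 3).
# Names and sizes below are illustrative only.
with h5py.File(demo_path, "w") as f:
    f.create_dataset("recording_0", data=np.zeros((100, 8, 3)))
    f.create_dataset("recording_1", data=np.zeros((250, 8, 3)))

# Load every recording into a dict mapping name -> NumPy array.
with h5py.File(demo_path, "r") as f:
    keypoints = {name: dset[:] for name, dset in f.items()}

for name, arr in keypoints.items():
    n_frames, n_keypoints, dims = arr.shape
    print(f"{name}: {n_frames} frames, {n_keypoints} keypoints, {dims}D")
```

The same read pattern should apply to the other h5 files in the dataset, subject to their own internal group and dataset names.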
