
Released

Poster

Markerless tracking of user-defined anatomical features with deep learning

MPG Authors

Bethge, M.
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources

Link (abstract)

Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe.
Supplementary material (freely accessible)
No freely accessible supplementary material is available.
Citation

Mathis, M., Mathis, A., Mamidanna, P., Abe, T., Murty, V., & Bethge, M. (2018). Markerless tracking of user-defined anatomical features with deep learning. Poster presented at 28th Annual Meeting of the Society for the Neural Control of Movement (NCM 2018), Santa Fe, NM, USA.


Citation link: https://hdl.handle.net/21.11116/0000-0001-7DF0-4
Abstract
Quantifying behavior is crucial for many applications in neuroscience. Videography provides an easy way to observe animals, yet extracting particular aspects of a behavior can be highly time-consuming. In motor control studies, humans or other animals are often fitted with reflective markers to assist computer-based tracking, but markers are intrusive, especially for smaller animals, and their number and location must be determined a priori. Here we provide a highly efficient method for markerless tracking in mice based on transfer learning with very few training samples (~200 frames). We demonstrate the versatility of this framework by tracking various body parts of mice in different tasks: odor trail-tracking (by one or multiple mice simultaneously) and a skilled forelimb reach-and-pull task. For example, during the skilled reaching behavior, the individual digit joints of the hand can be tracked automatically. Remarkably, even when only a small number of frames are labeled, the algorithm achieves tracking performance on test frames comparable to human accuracy.
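
To make the transfer-learning recipe concrete, below is a minimal PyTorch sketch of the general idea: fine-tune a pretrained image backbone to predict one heatmap per user-defined body part, so that a few hundred labeled frames can suffice. The network layout, keypoint count, image size, loss, and hyperparameters are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
from torchvision import models

N_KEYPOINTS = 4  # hypothetical: e.g. snout, two digit joints, tail base

class KeypointNet(nn.Module):
    def __init__(self, n_keypoints):
        super().__init__()
        # Pretrained ResNet-50 backbone; reusing these learned features is
        # why relatively few labeled frames (~200) can be enough.
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        # Small head mapping backbone features to one heatmap per keypoint.
        self.head = nn.Sequential(
            nn.ConvTranspose2d(2048, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, n_keypoints, kernel_size=1),
        )

    def forward(self, x):
        return self.head(self.backbone(x))

model = KeypointNet(N_KEYPOINTS)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy data standing in for labeled video frames; in practice the targets
# would be Gaussian bumps centered on each hand-labeled keypoint.
frames = torch.randn(2, 3, 256, 256)          # batch of RGB frames
targets = torch.rand(2, N_KEYPOINTS, 16, 16)  # target heatmaps

pred = model(frames)                          # (2, N_KEYPOINTS, 16, 16)
loss = nn.functional.mse_loss(pred, targets)
loss.backward()
optimizer.step()

# At inference, the peak of each predicted heatmap gives the tracked
# location of that body part (flat index on the 16x16 grid).
coords = pred.flatten(2).argmax(dim=2)

Once trained, running the network frame by frame over a video yields per-frame coordinates for every user-defined body part, with no physical markers required.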