
Released

Journal Article

Towards a markerless and automatic analysis of kinematic features: a toolkit for gesture and movement research

MPS-Authors
Trujillo, James P.
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society;
Research Associates, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;

Ozyurek, Asli
Research Associates, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;

Fulltext (public)

Trujillo_etal_2018.pdf
(Publisher version), 2MB

Supplementary Material (public)
There is no public supplementary material available.
Citation

Trujillo, J. P., Vaitonyte, J., Simanova, I., & Ozyurek, A. (2018). Towards a markerless and automatic analysis of kinematic features: a toolkit for gesture and movement research. Behavior Research Methods. Advance online publication. doi:10.3758/s13428-018-1086-8.


Cite as: https://hdl.handle.net/21.11116/0000-0001-9E8E-E
Abstract
Action, gesture, and sign represent unique aspects of human communication that use form and movement to convey meaning. Researchers typically use manual coding of video data to characterize naturalistic, meaningful movements at various levels of description, but the availability of markerless motion-tracking technology allows for quantification of the kinematic features of gestures or any meaningful human movement. We present a novel protocol for extracting a set of kinematic features from movements recorded with Microsoft Kinect. Our protocol captures spatial and temporal features, such as height, velocity, submovements/strokes, and holds. This approach is based on studies of communicative actions and gestures and attempts to capture features that are consistently implicated as important kinematic aspects of communication. We provide open-source code for the protocol, a description of how the features are calculated, a validation of these features as quantified by our protocol versus manual coders, and a discussion of how the protocol can be applied. The protocol effectively quantifies kinematic features that are important in the production (e.g., characterizing different contexts) as well as the comprehension (e.g., used by addressees to understand intent and semantics) of manual acts. The protocol can also be integrated with qualitative analysis, allowing fast and objective demarcation of movement units, providing accurate coding even of complex movements. This can be useful to clinicians, as well as to researchers studying multimodal communication or human–robot interactions. By making this protocol available, we hope to provide a tool that can be applied to understanding meaningful movement characteristics in human communication.
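To illustrate the kind of kinematic features the abstract describes (velocity, holds) as they might be computed from markerless motion-tracking output, the sketch below derives peak velocity and a hold count from a time series of 3-D joint positions, such as a Kinect hand joint. This is not the toolkit's actual code; the function name, the threshold values, and the hold definition are assumptions for illustration only.

```python
import numpy as np

def kinematic_features(positions, fps=30.0, hold_threshold=0.15, hold_min_frames=3):
    """Illustrative feature extraction from an (n_frames, 3) array of
    3-D joint positions. Parameter names and thresholds are assumed,
    not taken from the published toolkit."""
    positions = np.asarray(positions, dtype=float)
    # Frame-to-frame displacement converted to instantaneous speed (units/s).
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps
    peak_velocity = float(speed.max())
    # Treat a "hold" as a run of low-speed frames lasting at least
    # hold_min_frames; count each qualifying run once.
    holds, run = 0, 0
    for is_low in (speed < hold_threshold):
        run = run + 1 if is_low else 0
        if run == hold_min_frames:
            holds += 1
    return {"peak_velocity": peak_velocity, "holds": holds}
```

For example, a trajectory that moves steadily and then stops yields one detected hold and a peak velocity equal to the movement speed; in practice, thresholds would need calibration against the recording setup and manual coding, as the paper's validation suggests.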