

Journal Article

Prosodic temporal alignment of co-speech gestures to speech facilitates referent resolution

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons86

Jesse, Alexandra
Language Comprehension Department, MPI for Psycholinguistics, Max Planck Society;
Department of Psychology, University of Massachusetts, Amherst, MA;
Mechanisms and Representations in Comprehending Speech, MPI for Psycholinguistics, Max Planck Society;

Citation

Jesse, A., & Johnson, E. K. (2012). Prosodic temporal alignment of co-speech gestures to speech facilitates referent resolution. Journal of Experimental Psychology: Human Perception and Performance, 38, 1567-1581. doi:10.1037/a0027921.


Cite as: http://hdl.handle.net/11858/00-001M-0000-000F-A33B-A
Abstract
Using a referent detection paradigm, we examined whether listeners can determine which object speakers are referring to by using the temporal alignment between the motion speakers impose on objects and their labeling utterances. Stimuli were created by videotaping speakers labeling a novel creature. Without being explicitly instructed to do so, speakers moved the creature during labeling. Trajectories of these motions were used to animate photographs of the creature. Participants in subsequent perception studies heard these labeling utterances while seeing side-by-side animations of two identical creatures, in which only the target creature moved as originally intended by the speaker. Using the cross-modal temporal relationship between speech and referent motion, participants identified which creature the speaker was labeling, even when the labeling utterances were low-pass filtered to remove their semantic content or replaced by tone analogues. However, when the prosodic structure was eliminated by reversing the speech signal, participants no longer detected the referent as readily. These results provide strong support for a prosodic cross-modal alignment hypothesis. Speakers produce a perceptible link between the motion they impose upon a referent and the prosodic structure of their speech, and listeners readily use this prosodic cross-modal relationship to resolve referential ambiguity in word-learning situations.