  Prosodic temporal alignment of co-speech gestures to speech facilitates referent resolution

Jesse, A., & Johnson, E. K. (2012). Prosodic temporal alignment of co-speech gestures to speech facilitates referent resolution. Journal of Experimental Psychology: Human Perception and Performance, 38, 1567-1581. doi:10.1037/a0027921.

Files

Name: Jesse_JEP_Human_Perception_Performance_2012.pdf (Publisher version), 359 KB
Visibility: Public
MIME-Type: application/pdf


Creators

Jesse, Alexandra 1, 2, 3, Author
Johnson, Elizabeth K. 4, Author

Affiliations:
1 Language Comprehension Department, MPI for Psycholinguistics, Max Planck Society
2 Department of Psychology, University of Massachusetts, Amherst, MA
3 Mechanisms and Representations in Comprehending Speech, MPI for Psycholinguistics, Max Planck Society, Nijmegen, NL
4 University of Toronto, Toronto, Canada

Content

Free keywords: -
Abstract: Using a referent detection paradigm, we examined whether listeners can determine the object speakers are referring to by using the temporal alignment between the motion speakers impose on objects and their labeling utterances. Stimuli were created by videotaping speakers labeling a novel creature. Without being explicitly instructed to do so, speakers moved the creature during labeling. Trajectories of these motions were used to animate photographs of the creature. Participants in subsequent perception studies heard these labeling utterances while seeing side-by-side animations of two identical creatures, in which only the target creature moved as originally intended by the speaker. Using the cross-modal temporal relationship between speech and referent motion, participants identified which creature the speaker was labeling, even when the labeling utterances were low-pass filtered to remove their semantic content or replaced by tone analogues. However, when the prosodic structure was eliminated by reversing the speech signal, participants no longer detected the referent as readily. These results provide strong support for a prosodic cross-modal alignment hypothesis. Speakers produce a perceptible link between the motion they impose upon a referent and the prosodic structure of their speech, and listeners readily use this prosodic cross-modal relationship to resolve referential ambiguity in word-learning situations.

Details

Language(s): eng - English
Dates: 2011, 2012, 2012, 2012
Publication Status: Issued
Review Type: Peer
Identifiers: DOI: 10.1037/a0027921


Source 1

Title: Journal of Experimental Psychology: Human Perception and Performance
Source Genre: Journal
Publ. Info: Washington : American Psychological Association (PsycARTICLES)
Volume: 38
Start / End Page: 1567 - 1581
Identifier: ISSN: 0096-1523
CoNE: https://pure.mpg.de/cone/journals/resource/954927546243