Record
  Prosodic temporal alignment of co-speech gestures to speech facilitates referent resolution

Jesse, A., & Johnson, E. K. (2012). Prosodic temporal alignment of co-speech gestures to speech facilitates referent resolution. Journal of Experimental Psychology: Human Perception and Performance, 38, 1567-1581. doi:10.1037/a0027921.


Files

Jesse_JEP_Human_Perception_Performance_2012.pdf (publisher's version), 359KB
Name:
Jesse_JEP_Human_Perception_Performance_2012.pdf
Description:
-
OA status:
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
Copyright date:
-
Copyright info:
-
License:
-


Creators

Creators:
Jesse, Alexandra 1, 2, 3, Author
Johnson, Elizabeth K. 4, Author
Affiliations:
1 Language Comprehension Department, MPI for Psycholinguistics, Max Planck Society, ou_792550
2 Department of Psychology, University of Massachusetts, Amherst, MA, ou_persistent22
3 Mechanisms and Representations in Comprehending Speech, MPI for Psycholinguistics, Max Planck Society, Nijmegen, NL, ou_55215
4 University of Toronto, Toronto, Canada, ou_persistent22

Content

Keywords: -
Abstract: Using a referent detection paradigm, we examined whether listeners can determine the object speakers are referring to by using the temporal alignment between the motion speakers impose on objects and their labeling utterances. Stimuli were created by videotaping speakers labeling a novel creature. Without being explicitly instructed to do so, speakers moved the creature during labeling. Trajectories of these motions were used to animate photographs of the creature. Participants in subsequent perception studies heard these labeling utterances while seeing side-by-side animations of two identical creatures in which only the target creature moved as originally intended by the speaker. Using the cross-modal temporal relationship between speech and referent motion, participants identified which creature the speaker was labeling, even when the labeling utterances were low-pass filtered to remove their semantic content or replaced by tone analogues. However, when the prosodic structure was eliminated by reversing the speech signal, participants no longer detected the referent as readily. These results provide strong support for a prosodic cross-modal alignment hypothesis. Speakers produce a perceptible link between the motion they impose upon a referent and the prosodic structure of their speech, and listeners readily use this prosodic cross-modal relationship to resolve referential ambiguity in word-learning situations.

Details

Language(s): eng - English
Date(s): 2011, 2012, 2012, 2012
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Review type: Peer review
Identifiers: DOI: 10.1037/a0027921
Degree: -


Source 1

Title: Journal of Experimental Psychology: Human Perception and Performance
Source genre: Journal
Creators:
Affiliations:
Place, publisher, edition: Washington : American Psychological Association (PsycARTICLES)
Pages: - Volume / Issue: 38 Article number: - Start / End page: 1567 - 1581 Identifier: ISSN: 0096-1523
CoNE: https://pure.mpg.de/cone/journals/resource/954927546243