
Released

Conference Paper

Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers

MPG Authors

Holler, Judith
Language and Cognition Department, MPI for Psycholinguistics, Max Planck Society;
INTERACT, MPI for Psycholinguistics, Max Planck Society;


Schubotz, Louise
International Max Planck Research School for Language Sciences, MPI for Psycholinguistics, Max Planck Society, Nijmegen, NL;
Center for Language Studies, External Organizations;


Schuetze, Manuela
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society;


Hagoort, Peter
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour;


Ozyurek, Asli
Donders Institute for Brain, Cognition and Behaviour;
Language in our Hands: Sign and Gesture, MPI for Psycholinguistics, Max Planck Society;
Center for Language Studies, External Organizations;

External Resources
There are no external resources available.
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)

paper0463.pdf
(publisher version), 740 KB

Supplementary Material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2560-2565). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0463/index.html


Citation link: https://hdl.handle.net/11858/00-001M-0000-000E-EDF6-F
Abstract
In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from the different modalities involved, and how perceived communicative intentions, often conveyed through visual signals such as eye gaze, may influence this processing. We address this question by simulating a triadic communication context in which a speaker alternated her gaze between two different recipients. Participants thus viewed speech-only or speech+gesture object-related utterances while being addressed (direct gaze) or unaddressed (averted gaze). Two object images followed each message, and participants’ task was to choose the object that matched the message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to a level identical to that of addressees. That is, when speech processing suffers due to not being addressed, gesture processing remains intact and enhances the comprehension of a speaker’s message.