
Journal Article

What speakers do and what addressees look at: Visual attention to gestures in human interaction live and on video

MPS-Authors

Gullberg, Marianne
Language Acquisition Group, MPI for Psycholinguistics, Max Planck Society;
The Dynamics of Multilingual Processing, MPI for Psycholinguistics, Max Planck Society

Fulltext (public)

what speakers do and what.pdf (Publisher version, 3MB)

Citation

Gullberg, M., & Holmqvist, K. (2006). What speakers do and what addressees look at: Visual attention to gestures in human interaction live and on video. Pragmatics & Cognition, 14(1), 53-82.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-18D6-E
Abstract
This study investigates whether addressees visually attend to speakers’ gestures in interaction and whether attention is modulated by changes in social setting and display size. We compare a live face-to-face setting to two video conditions. In all conditions, the face dominates as a fixation target and only a minority of gestures draw fixations. The social and size parameters affect gaze mainly when combined, and in the opposite direction from that predicted, with fewer gestures fixated on video than live. Gestural holds and speakers’ gaze at their own gestures reliably attract addressees’ fixations in all conditions. The attraction force of holds is unaffected by changes in the social and size parameters, suggesting a bottom-up response, whereas speaker-fixated gestures draw significantly less attention in both video conditions, suggesting a social effect for overt gaze-following and visual joint attention. The study provides and validates a video-based paradigm enabling further experimental yet ecologically valid explorations of cross-modal information processing.