
Journal Article

Native language status of the listener modulates the neural integration of speech and iconic gestures in clear and adverse listening conditions

MPS-Authors

Ozyurek, Asli
Research Associates, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;

Fulltext (public)

Drijvers_Ozyurek_2018.pdf (publisher version, 754 KB)

Supplementary Material (public)
There is no public supplementary material available.
Citation

Drijvers, L., & Ozyurek, A. (2018). Native language status of the listener modulates the neural integration of speech and iconic gestures in clear and adverse listening conditions. Brain and Language, 177-178, 7-17. doi:10.1016/j.bandl.2018.01.003.


Cite as: https://hdl.handle.net/21.11116/0000-0000-320C-B
Abstract
Native listeners neurally integrate iconic gestures with speech, which can enhance comprehension of degraded speech. However, it is unknown how non-native listeners neurally integrate speech and gestures, as they might process visual semantic context differently than natives. We recorded EEG while native and highly proficient non-native listeners watched videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching gesture ('to drive' + driving gesture) or a mismatching gesture ('to drink' + mixing gesture). Degraded speech elicited an enhanced N400 amplitude compared to clear speech in both groups, revealing an increase in the neural resources needed to resolve the spoken input. A larger N400 effect was found in clear speech for non-natives than for natives, but in degraded speech only for natives. Non-native listeners might thus process gesture more strongly than natives when speech is clear, but need more auditory cues to facilitate access to gestural semantic information when speech is degraded.