
Item



Talk

The processing of disfluencies in native and non-native speech

Citation

Bosker, H. R., Quené, H., Sanders, T., & De Jong, N. H. (2013). The processing of disfluencies in native and non-native speech. Talk presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013). Marseille, France. 2013-09-02 - 2013-09-04.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0023-D794-C
Abstract
Speech comprehension involves extensive use of prediction – "determining what you yourself or your interlocutor is likely to say next" (Pickering & Garrod, in press, p. 14). Predictions may be based on the semantics, syntax, or phonology of the incoming speech signal. Arnold, Hudson Kam, & Tanenhaus (2007) have convincingly demonstrated that listeners may even base their predictions on the presence of disfluencies. When participants in an eye-tracking experiment heard a disfluent instruction containing a filled pause, they were more likely to fixate an unknown than a known object – a disfluency bias. This suggests that listeners very rapidly draw inferences about the speaker and the possible sources of disfluency.

Our current goal is to study the contrast between native and non-native disfluencies in speech comprehension. Non-native speakers have additional reasons to be disfluent, since they are speaking in their L2. If listeners are aware that non-native disfluencies may have different cognitive origins (for instance, low L2 proficiency), the disfluency bias – present in native speech comprehension – may be attenuated when listening to non-native speech.

Two eye-tracking studies, using the Visual World Paradigm, were designed to study the processing of native and non-native disfluencies. We presented participants with pictures of either high-frequency (e.g., a hand) or low-frequency objects (e.g., a sewing machine). Pre-recorded instructions from a native or a non-native speaker told participants to click on one of two pictures while participants' eye movements were recorded. Instructions were either fluent (e.g., "Click on the [target]") or disfluent (e.g., "Click on ..uh.. the [target]"). When listeners heard disfluent instructions from a native speaker, anticipatory eye movements towards low-frequency pictures were observed – a disfluency bias. In contrast, when listeners heard a non-native speaker produce the same utterances with a foreign accent, the disfluency bias was attenuated.

We conclude that (a) listeners may use disfluencies to draw inferences about speaker difficulty in the conceptualization and formulation of the target; and (b) speaker knowledge (hearing a foreign accent) may modulate these inferences – presumably because of the strong association between a non-native accent and disfluency.