  The processing of disfluencies in native and non-native speech

Bosker, H. R., Quené, H., Sanders, T., & De Jong, N. H. (2013). The processing of disfluencies in native and non-native speech. Talk presented at The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013). Marseille, France. 2013-09-02 - 2013-09-04.

 Creators:
Bosker, Hans R.1, Author           
Quené, Hugo1, Author
Sanders, Ted1, Author
De Jong, Nivja H.1, Author
Affiliations:
1Utrecht Institute of Linguistics OTS

Content

Free keywords: -
 Abstract: Speech comprehension involves extensive use of prediction – “determining what you yourself or your interlocutor is likely to say next” (Pickering & Garrod, in press, p. 14). Predictions may be based on the semantics, syntax, or phonology of the incoming speech signal. Arnold, Hudson Kam, & Tanenhaus (2007) have convincingly demonstrated that listeners may even base their predictions on the presence of disfluencies. When participants in an eye-tracking experiment heard a disfluent instruction containing a filled pause, they were more likely to fixate an unknown object than a known one – a disfluency bias. This suggests that listeners very rapidly draw inferences about the speaker and the possible sources of disfluency. Our current goal is to study the contrast between native and non-native disfluencies in speech comprehension. Non-native speakers have additional reasons to be disfluent, since they are speaking in their L2. If listeners are aware that non-native disfluencies may have different cognitive origins (for instance, low L2 proficiency), the disfluency bias – present in native speech comprehension – may be attenuated when listening to non-native speech. Two eye-tracking studies, using the Visual World Paradigm, were designed to study the processing of native and non-native disfluencies. We presented participants with pictures of either high-frequency objects (e.g., a hand) or low-frequency objects (e.g., a sewing machine). Pre-recorded instructions from a native or a non-native speaker told participants to click on one of two pictures while participants’ eye movements were recorded. Instructions were either fluent (e.g., “Click on the [target]”) or disfluent (e.g., “Click on ..uh.. the [target]”). When listeners heard disfluent instructions from a native speaker, anticipatory eye movements towards low-frequency pictures were observed – a disfluency bias. In contrast, when listeners heard a non-native speaker produce the same utterances with a foreign accent, the disfluency bias was attenuated. We conclude that (a) listeners may use disfluencies to draw inferences about speaker difficulty in conceptualization and formulation of the target; and (b) speaker knowledge (hearing a foreign accent) may modulate these inferences – presumably because of the strong correlation between non-native accent and disfluency.

Details

Language(s): eng - English
 Dates: 2013
 Publication Status: Not specified
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: -
 Degree: -

Event

Title: The 19th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2013)
Place of Event: Marseille, France
Start-/End Date: 2013-09-02 - 2013-09-04
