
Released

Conference Paper

Learning to express "left-right" & "front-behind" in a sign versus spoken language

MPG Authors

Sumer, Beyza
Language in our Hands: Sign and Gesture, MPI for Psycholinguistics, Max Planck Society;
International Max Planck Research School for Language Sciences, MPI for Psycholinguistics, Max Planck Society, Nijmegen, NL;
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society;


Zwitserlood, Inge
Language in our Hands: Sign and Gesture, MPI for Psycholinguistics, Max Planck Society;


Ozyurek, Asli
Language in our Hands: Sign and Gesture, MPI for Psycholinguistics, Max Planck Society;
Research Associates, MPI for Psycholinguistics, Max Planck Society;

Full texts (freely accessible)

Sumer_etal_2014_CogSci.pdf
(Publisher version), 399KB

Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Sumer, B., Perniss, P., Zwitserlood, I., & Ozyurek, A. (2014). Learning to express "left-right" & "front-behind" in a sign versus spoken language. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1550-1555). Austin, TX: Cognitive Science Society.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0019-88D2-1
Abstract
Developmental studies show that it takes longer for children learning spoken languages to acquire viewpoint-dependent spatial relations (e.g., left-right, front-behind) than ones that are not viewpoint-dependent (e.g., in, on, under). The current study investigates how children learn to express viewpoint-dependent relations in a sign language, where spatial relations can be depicted in an analogue manner in the space in front of the body or conveyed with body-anchored signs (e.g., tapping the right and left hand/arm to mean left and right). Our results indicate that the visual-spatial modality might have a facilitating effect on learning to express these spatial relations (especially in the encoding of left-right) in a sign language (i.e., Turkish Sign Language) compared to a spoken language (i.e., Turkish).