
Record


Released

Conference Paper

Embodied Interaction in Immersive Virtual Environments with Real Time Self-animated Avatars

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83891

Dodds, TJ
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84088

Mohler, BJ
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83877

de la Rosa, S
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84240

Streuber, S
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources available
Full Texts (freely accessible)
There are no freely accessible full texts available
Supplementary Material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Dodds, T., Mohler, B., de la Rosa, S., Streuber, S., & Bülthoff, H. (2011). Embodied Interaction in Immersive Virtual Environments with Real Time Self-animated Avatars. In Workshop Embodied Interaction: Theory and Practice in HCI (CHI 2011) (pp. 132-135). New York, NY, USA: ACM Press.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-BBC8-A
Abstract
This paper outlines our recent research on providing users with a 3D avatar representation, focusing in particular on studies in which the avatar is self-animated in real time. We use full-body motion tracking, so that when participants move their hands and feet, these movements are mapped onto the avatar. In a recent study (Dodds et al., CASA 2010), we found that a self-animated avatar aided participants in a communication task in a head-mounted display immersive virtual environment (VE). From the perspective of communication, we found it was important not only for the person speaking to be self-animated, but also for the person listening. Further, we show the potential of immersive VEs for investigating embodied interaction, and highlight possibilities for future research.