
Released

Conference Paper

Virtual Storyteller in Immersive Virtual Environments Using Fairy Tales Annotated for Emotion States

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83780

Alexandrova,  IV
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84285

Volkova,  EP
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Kloos,  U, Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84088

Mohler,  BJ
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Alexandrova, I., Volkova, E., Kloos, U., Bülthoff, H., & Mohler, B. (2010). Virtual Storyteller in Immersive Virtual Environments Using Fairy Tales Annotated for Emotion States. Virtual Environments 2010: Joint Virtual Reality Conference of EGVE - 16th Eurographics Symposium on Virtual Environments, EuroVR - the 7th EuroVR (INTUITION) Conference, VEC - the annual Virtual Efficiency Congress, 65-68.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-BDF6-3
Abstract
This paper describes the implementation of a virtual storyteller automatically generated from fairy tale texts that were previously annotated for emotion. To gain insight into the effectiveness of our virtual storyteller, we recorded the face, body, and voice of an amateur actor and created an actor animation video of one of the fairy tales. We also obtained the actor's annotation of the fairy tale text and used it to create a virtual storyteller video. With these two videos, the virtual storyteller and the actor animation, we conducted a user study to determine how effectively our virtual storyteller conveyed the actor's intended emotions. Encouragingly, participants performed best (when compared to the intended emotions of the actor) when marking the emotions of the virtual storyteller. Interestingly, the actor himself was not able to annotate the animated actor video with high accuracy compared to his annotated text. This suggests that in future work we must have actors annotate not just the text but also their body and facial expressions, in order to further investigate the effectiveness of our virtual storyteller. This research is a first step towards using our virtual storyteller in real-time immersive virtual environments.