Poster

Virtual storytelling of fairy tales: Towards simulation of emotional perception of text

MPS-Authors

Volkova, EP
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Alexandrova, IV
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Mohler, BJ
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society

Citation

Volkova, E., Alexandrova, I., Bülthoff, H., & Mohler, B. (2010). Virtual storytelling of fairy tales: Towards simulation of emotional perception of text. Poster presented at 33rd European Conference on Visual Perception (ECVP 2010), Lausanne, Switzerland.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-BEF2-3
Abstract
Emotion analysis (EA) is a rapidly developing area in computational linguistics. For most EA systems, the number of emotion classes is very limited, and the text units to which the classes are assigned are discrete and predefined. The question we address is whether the set of emotion categories can be enriched and whether the units to which the categories are assigned can be defined more flexibly. Six untrained participants annotated a corpus of eight texts, using fifteen emotion categories and no predetermined annotation units. The inter-annotator agreement rates were remarkably high for this difficult task: 0.55 (moderate) on average, reaching 0.82 (almost perfect) for some annotator pairs. The final application of the intended EA system is predominantly the emotional enhancement of human–computer interaction in virtual reality. The system is meant to be a bridge between unprocessed input text and auditory and visual information: generated speech, and animation of facial expressions and body language. The first steps towards integrating text annotated with emotion categories into the simulation of human emotional perception of texts in storytelling scenarios for virtual reality have already been taken: we have created a virtual character whose facial and body animation is driven by the annotations in the text.
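
The abstract does not name the agreement metric, but the "moderate" and "almost perfect" labels match the Landis–Koch interpretation scale for Cohen's kappa, so a pairwise kappa is a plausible reading. The sketch below is only an illustration of that statistic on hypothetical labels over shared units; the paper's fifteen-category set and its free-form (non-predefined) unit boundaries are not reproduced here.

    from collections import Counter

    def cohen_kappa(ann_a, ann_b):
        """Cohen's kappa for two annotators labeling the same sequence of units."""
        assert len(ann_a) == len(ann_b)
        n = len(ann_a)
        # Observed agreement: fraction of units both annotators labeled identically.
        p_o = sum(a == b for a, b in zip(ann_a, ann_b)) / n
        # Chance agreement: expected overlap given each annotator's label distribution.
        freq_a, freq_b = Counter(ann_a), Counter(ann_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical emotion labels for five text spans (not from the paper's corpus).
    a = ["joy", "fear", "anger", "joy", "neutral"]
    b = ["joy", "fear", "joy", "joy", "neutral"]
    print(f"kappa = {cohen_kappa(a, b):.2f}")  # kappa = 0.71

Note that plain Cohen's kappa assumes fixed, predefined units; annotation with freely chosen unit boundaries, as described in the abstract, would additionally require aligning the annotators' spans before agreement can be scored.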