
Record


Released

Poster

Virtual storytelling of fairy tales: Towards simulation of emotional perception of text

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84285

Volkova,  EP
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83780

Alexandrova,  IV
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84088

Mohler,  BJ
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources available
Full Texts (publicly accessible)
There are no publicly accessible full texts available
Supplementary Material (publicly accessible)
There are no publicly accessible supplementary materials available
Citation

Volkova, E., Alexandrova, I., Bülthoff, H., & Mohler, B. (2010). Virtual storytelling of fairy tales: Towards simulation of emotional perception of text. Poster presented at 33rd European Conference on Visual Perception, Lausanne, Switzerland.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-BEF2-3
Abstract
Emotion analysis (EA) is a rapidly developing area in computational linguistics. For most EA systems, the number of emotion classes is very limited, and the text units to which the classes are assigned are discrete and predefined. The question we address is whether the set of emotion categories can be enriched and whether the units to which the categories are assigned can be defined more flexibly. Six untrained participants annotated a corpus of eight texts with no predetermined annotation units, using fifteen emotional categories. The inter-annotator agreement rates were remarkably high for this difficult task: 0.55 (moderate) on average, reaching 0.82 (almost perfect) for some annotator pairs. The final application of the intended EA system is predominantly the emotion enhancement of human–computer interaction in virtual reality. The system is meant to be a bridge between unprocessed input text and auditory and visual information: generated speech, animation of facial expressions, and body language. The first steps towards integrating text-based information annotated for emotion categories and simulating human emotional perception of texts in storytelling scenarios for virtual reality have already been taken. We have created a virtual character whose facial and body animation is driven by annotations in the text.
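The abstract reports chance-corrected agreement values in the ranges conventionally labelled "moderate" and "almost perfect" (Landis–Koch interpretation), but does not name the statistic used. As a minimal illustrative sketch, assuming pairwise Cohen's kappa over discrete annotation units (the function name and the toy labels below are hypothetical, not from the study):

```python
from collections import Counter

def cohens_kappa(ann_a, ann_b):
    """Chance-corrected agreement between two annotators over the same units.

    ann_a, ann_b: equal-length lists of category labels, one per text unit.
    """
    assert len(ann_a) == len(ann_b) and ann_a
    n = len(ann_a)
    # Observed agreement: fraction of units labelled identically.
    p_o = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical annotations of ten text units with three emotion labels:
a = ["joy", "fear", "joy", "anger", "joy", "fear", "anger", "joy", "fear", "joy"]
b = ["joy", "fear", "joy", "joy",   "joy", "fear", "anger", "joy", "anger", "joy"]
print(round(cohens_kappa(a, b), 2))  # → 0.67, "substantial" on the Landis–Koch scale
```

A pairwise kappa can be averaged over all annotator pairs to summarize a group of six annotators, which would match the "0.55 on average, 0.82 for some pairs" reporting style; multi-rater alternatives such as Fleiss' kappa aggregate differently.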