
Record


Released

Talk

Emotional Perception of Fairy Tales: Achieving Agreement in Emotion Annotation of Text

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84285

Volkova, E. P.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External resources
There are no external resources available
Full texts (publicly accessible)
There are no publicly accessible full texts available
Supplementary material (publicly accessible)
There is no publicly accessible supplementary material available
Citation

Volkova, E. (2010). Emotional Perception of Fairy Tales: Achieving Agreement in Emotion Annotation of Text. Talk presented at 20. Tagung der Computerlinguistik Studierenden (TaCoS 2010). Zürich, Switzerland.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-C002-1
Abstract
Emotion analysis (EA) is a rapidly developing area in computational linguistics. An EA system can be extremely useful in fields such as information retrieval and emotion-driven computer animation. For most EA systems, the number of emotion classes is very limited and the text units the classes are assigned to are discrete and predefined. The question we address is whether the set of emotion categories can be enriched and whether the units to which the categories are assigned can be more flexibly defined.

Ten German native speakers voluntarily participated in the series of experiments we conducted to answer these questions. The participants were divided into two groups, and each participant worked on five of the eight Grimms' fairy tales; each text was 1200-1400 words long and written in Standard German. Fifteen emotion categories were used: seven positive (relief, joy, hope, interest, compassion, surprise, approval), seven negative (disturbance, sadness, despair, disgust, hatred, fear, anger), and neutral. The main task for the participants was to locate and mark stretches of text where an emotion would be conveyed through speech melody and/or facial expressions if the participant were to read the text aloud. Another task was to annotate the word lists for the fairy-tale texts. Other assignments included cognitive tasks on emotion categories outside the fairy-tale context, e.g. giving definitions of emotion categories or organizing the categories into clusters.

The κ inter-annotator agreement (IAA) scores were calculated using the emotion cluster information, since, according to the results, participants would often consistently use different emotions from the same clusters at the same stretch of text. Four of the ten participants, two from each group, had very low IAA scores, a high proportion of unmarked text, and used few emotion categories, so their data were discarded from the evaluation.
The final IAA evaluation was calculated on all the annotation pairs obtained from the six remaining participants. The total number of annotation pairs thus amounted to 48: two texts annotated by all six annotators, and six texts annotated by three annotators in each of the two annotation sets. The annotator agreement was moderate on average (0.53), and some pairs approached the almost perfect IAA rate (0.83). The IAA rates calculated on the full set of fifteen emotions, without taking the emotion clusters into consideration, gave a moderate IAA rate on average (0.34) and reached a substantial level (0.62) at maximum.

We consider the resulting IAA rates high enough to accept the annotations as suitable for gold-standard corpus compilation in the frame of this research. As such, we view this work as the first step towards the development of a more complex EA system, which aims to simulate actual human emotional perception of text. The final goal of our project is an EA system used for emotion enhancement of human-computer interaction in virtual or augmented reality. It is meant to be a bridge between unprocessed input text and visual and auditory information coming from a virtual character in storytelling scenarios.
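The abstract does not specify how the κ scores were computed. A minimal sketch of the idea, using Cohen's kappa for one annotator pair and a small hypothetical cluster mapping (the emotion names and cluster labels below are illustrative, not the study's actual clusters), shows why cluster-level agreement comes out higher than agreement on the full fine-grained label set:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: from each annotator's own label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical mapping from fine-grained emotions to clusters.
clusters = {"joy": "positive", "relief": "positive",
            "fear": "negative", "despair": "negative"}

# Toy annotations for the same five stretches of text.
a = ["joy", "fear", "relief", "joy", "despair"]
b = ["relief", "fear", "joy", "joy", "fear"]

kappa_raw = cohens_kappa(a, b)
kappa_clustered = cohens_kappa([clusters[x] for x in a],
                               [clusters[x] for x in b])
```

Here the annotators often pick different emotions from the same cluster, so `kappa_raw` is low while `kappa_clustered` is high, mirroring the gap the abstract reports between the cluster-based (0.53 average) and fifteen-category (0.34 average) IAA rates.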