  Contribution of Prosody in Audio-visual Integration to Emotional Perception of Virtual Characters

Volkova, E., Mohler, B., Linkenauger, S., Alexandrova, I., & Bülthoff, H. (2011). Contribution of Prosody in Audio-visual Integration to Emotional Perception of Virtual Characters. Poster presented at 12th International Multisensory Research Forum (IMRF 2011), Fukuoka, Japan.

Creators

Creators:
Volkova, E.1, Author
Mohler, B.1, Author
Linkenauger, S.1, Author
Alexandrova, I.1, Author
Bülthoff, H. H.1, Author
Affiliations:
1 Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797

Content

Keywords: -
Abstract: Recent technology provides us with realistic-looking virtual characters. Motion capture and elaborate mathematical models supply data for natural-looking, controllable facial and bodily animations. With the help of computational linguistics and artificial intelligence, we can automatically assign emotional categories to appropriate stretches of text in order to simulate social scenarios where verbal communication is important. All this makes virtual characters a valuable tool for creating versatile stimuli for research on the integration of emotion information from different modalities. We conducted an audio-visual experiment to investigate the differential contributions of emotional speech and facial expressions to emotion identification. We used recorded and synthesized speech as well as dynamic virtual faces, all enhanced for seven emotional categories. Participants were asked to recognize the prevalent emotion of paired faces and audio. Results showed that when the voice was recorded, the vocalized emotion influenced participants' emotion identification more than the facial expression. However, when the voice was synthesized, the facial expression influenced participants' emotion identification more than the vocalized emotion. Additionally, participants performed worse at identifying either the facial expression or the vocalized emotion when the voice was synthesized. Our experimental method can help determine how to improve synthesized emotional speech.

Details

Language(s): -
Date: 2011-10
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: URI: http://imrf.mcmaster.ca/IMRF/ocs3/index.php/imrf/2011/paper/view/263
BibTeX citekey: VolkovaMLAB2011
Degree: -

Event

Title: 12th International Multisensory Research Forum (IMRF 2011)
Venue: Fukuoka, Japan
Start/end date: -
