Analysis and Synthesis of Facial Expressions using Computer Graphics Animation and Psychophysics

Nusseck, M. (2004). Analysis and Synthesis of Facial Expressions using Computer Graphics Animation and Psychophysics. Talk presented at 5. Neurowissenschaftliche Nachwuchskonferenz Tübingen (NeNa '04). Oberjoch, Germany.

Creators

 Creators:
Nusseck, M.¹, Author
Affiliations:
¹ Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797

Content

Free keywords: -
 Abstract: The human face is one of the most ecologically relevant objects for visual perception. Although the face changes expressions constantly and in a variety of complex ways, we are able to interpret these with a quick glance at a face. In particular, facial motion plays a complex and important role in communication. It can be used, for example, to convey meaning, to express an emotion or to modify the meaning of what is said. My research is focused on what we can learn, using psychophysical methodologies, about the human visual system from the way faces move. I will attempt to develop a detailed cognitive model for the perception of expressions by exploring and differentiating the information channels contained in facial expressions. Here, I present the results of psychophysical experiments, in which we manipulated real video sequences of facial expressions of different actors. In the first experiment, we scaled down the video sequences to find out how the recognition of an expression depends on the presented image size [2]. In a second set of experiments, Cunningham et al. selectively ’froze’ portions of a face to produce an initial, systematic description of the parts of a face that are necessary and sufficient for the recognition of facial expressions [3]. Based on these experiments, I will outline future work in which we plan to use computer animated faces [1]. This will allow us to produce realistic image sequences while retaining complete control over what occurs in the images (e.g., to finely alter the temporal parameters such as the speed, acceleration, duration, or synchronization of facial motion). Finally, I want to propose a unifying framework of interpretation and manipulation of facial analysis and synthesis, which contains different, hierarchically organized levels of perception and simulation. Within this framework, we can systematically identify and analyze the information channels that are addressed by the cognitive experiments described above. The results from this line of research are expected not only to shed light on perceptual mechanisms of expression recognition, but also to help improve computer animation in order to create perceptually consistent, realistic and believable conversational agents.
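The stimulus manipulations mentioned in the abstract (downscaling the video sequences and "freezing" parts of the face) can be sketched in a few lines of code. The following Python/OpenCV snippet is only an illustrative assumption of how such manipulations might be implemented; the input file name, scale factor, and frozen region are placeholders and do not correspond to the actual experimental pipeline.

# Hypothetical sketch of the two stimulus manipulations described in the
# abstract: (1) downscaling a video sequence and (2) "freezing" a facial
# region so that it no longer moves. Not the original experimental code;
# the file name, scale factor, and region coordinates are placeholders.
import cv2

VIDEO_IN = "expression.avi"           # placeholder clip of one expression
SCALE = 0.25                          # e.g., reduce each side to 25%
FREEZE_REGION = (200, 300, 150, 100)  # placeholder (x, y, width, height), e.g., mouth area

# Read all frames of the sequence into memory.
cap = cv2.VideoCapture(VIDEO_IN)
frames = []
ok, frame = cap.read()
while ok:
    frames.append(frame)
    ok, frame = cap.read()
cap.release()

# (1) Size manipulation: shrink every frame to study how recognition of the
#     expression depends on the presented image size.
small_frames = [
    cv2.resize(f, None, fx=SCALE, fy=SCALE, interpolation=cv2.INTER_AREA)
    for f in frames
]

# (2) "Freezing" manipulation: copy the region from the first frame into
#     every later frame, so that part of the face stays static while the
#     rest of the expression unfolds normally.
x, y, w, h = FREEZE_REGION
frozen_patch = frames[0][y:y + h, x:x + w].copy()
frozen_frames = []
for f in frames:
    g = f.copy()
    g[y:y + h, x:x + w] = frozen_patch
    frozen_frames.append(g)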

Details

Language(s):
 Dates: 2004-08
 Publication Status: Issued
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: URI: http://www.neuroschool-tuebingen-nena.de/index.php?id=284
BibTeX Citekey: Nusseck2004
 Degree: -

Event

Title: 5. Neurowissenschaftliche Nachwuchskonferenz Tübingen (NeNa '04)
Place of Event: Oberjoch, Germany
Start-/End Date: -
