
Released

Talk

Analysis and Synthesis of Facial Expressions Using Computer Graphics Animation and Psychophysics

MPS-Authors

Nusseck,  M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Nusseck, M. (2004). Analysis and Synthesis of Facial Expressions Using Computer Graphics Animation and Psychophysics. Talk presented at 5. Neurowissenschaftliche Nachwuchskonferenz Tübingen (NeNa '04). Oberjoch, Germany.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-D879-5
Abstract
The human face is one of the most ecologically relevant objects for visual perception. Although the face changes expression constantly and in a variety of complex ways, we are able to interpret these changes with a quick glance. In particular, facial motion plays a complex and important role in communication: it can be used, for example, to convey meaning, to express an emotion, or to modify the meaning of what is said. My research focuses on what we can learn about the human visual system, using psychophysical methodologies, from the way faces move. I will attempt to develop a detailed cognitive model of the perception of expressions by exploring and differentiating the information channels contained in facial expressions. Here, I present the results of psychophysical experiments in which we manipulated real video sequences of facial expressions of different actors. In the first experiment, we scaled down the video sequences to find out how the recognition of an expression depends on the presented image size [2]. In a second set of experiments, Cunningham et al. selectively 'froze' portions of a face to produce an initial, systematic description of the parts of a face that are necessary and sufficient for the recognition of facial expressions [3]. Based on these experiments, I will outline future work in which we plan to use computer-animated faces [1]. This will allow us to produce realistic image sequences while retaining complete control over what occurs in the images (e.g., to finely alter temporal parameters such as the speed, acceleration, duration, or synchronization of facial motion). Finally, I want to propose a unifying framework for the interpretation and manipulation of facial analysis and synthesis, which contains different, hierarchically organized levels of perception and simulation. Within this framework, we can systematically identify and analyze the information channels that are addressed by the cognitive experiments described above. The results from this line of research are expected not only to shed light on the perceptual mechanisms of expression recognition, but also to help improve computer animation in order to create perceptually consistent, realistic, and believable conversational agents.