
Released

Meeting Abstract

Dynamic reweighting of facial form and motion cues during face recognition

MPS-Authors

Dobs,  K
Project group: Recognition & Categorization, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Dobs, K., & Reddy, L. (2016). Dynamic reweighting of facial form and motion cues during face recognition. Perception, 45(ECVP Abstract Supplement), 87-88.


Cite as: https://hdl.handle.net/21.11116/0000-0000-7C7F-8
Abstract
The integration of multiple sensory cues pertaining to the same object is essential for precise and accurate perception. An optimal strategy is to weight these cues in proportion to their reliability. Moreover, as the reliability of sensory information may change rapidly, the perceptual weight assigned to each cue must also change dynamically. Recent studies have shown that human observers apply this principle when integrating low-level unisensory and multisensory signals, but evidence for high-level perception remains scarce. Here we asked whether human observers dynamically reweight high-level visual cues during face recognition. We had subjects (n = 6) identify one of two previously learned synthetic facial identities using form and motion, and varied form reliability (i.e., by making faces "older") on a trial-to-trial basis. For each subject, we fitted psychometric functions to the proportion of identity choices in each condition. As predicted by optimal cue integration, the empirical combined variance did not differ from the optimal combined variance (p > 0.2, t-test). Importantly, reducing form reliability (p < 0.01) led to a reweighting of the form cue (p < 0.01). Our data thus suggest that humans not only integrate but also dynamically reweight high-level visual cues, such as facial form and motion, to yield a coherent percept of a facial identity.
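The optimal-cue-integration principle the abstract tests can be sketched numerically. In the standard maximum-likelihood model for independent Gaussian cues, each cue's weight is proportional to its reliability (inverse variance), and the combined variance is lower than that of either cue alone. The function and values below are illustrative assumptions, not the authors' stimuli or data:

```python
# Illustrative sketch of reliability-weighted (maximum-likelihood) cue
# integration for two independent Gaussian cues, e.g. facial form and motion.
# Function name and the numeric estimates/variances are hypothetical.

def integrate_cues(estimates, variances):
    """Optimally combine independent Gaussian cue estimates.

    Each weight is proportional to the cue's reliability (1 / variance);
    the combined variance is 1 / (sum of reliabilities), which is always
    smaller than any single cue's variance.
    """
    reliabilities = [1.0 / v for v in variances]
    total_reliability = sum(reliabilities)
    weights = [r / total_reliability for r in reliabilities]
    combined_estimate = sum(w * e for w, e in zip(weights, estimates))
    combined_variance = 1.0 / total_reliability
    return combined_estimate, combined_variance, weights

# Example: a reliable motion cue (variance 0.01) and a degraded "older"
# form cue (variance 0.04). Reduced form reliability shifts weight
# toward the motion cue, as in the dynamic reweighting the study reports.
est, var, w = integrate_cues(estimates=[0.6, 0.4], variances=[0.01, 0.04])
```

In this sketch, degrading one cue on a given trial immediately lowers its weight on that trial, which is the trial-to-trial reweighting behavior the abstract attributes to human observers.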