
Record


Released

Poster

Cross modal transfer in face recognition

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83892

Dopjans, L
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84298

Wallraven, C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External resources
There are no external resources available
Full texts (freely accessible)
There are no freely accessible full texts available
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Dopjans, L., Wallraven, C., & Bülthoff, H. (2007). Cross modal transfer in face recognition. Poster presented at 10th Tübinger Wahrnehmungskonferenz (TWK 2007), Tübingen, Germany.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-CCD3-9
Abstract
Prior studies have shown that humans can recognize faces by touch alone but perform poorly in cross-modal face recognition [1]. Here we want to shed further light on haptic face recognition with four experiments using a well-defined stimulus face space based on the morphable MPI Face Database. Experiment 1 used a same/different task with sequentially presented faces, which established that subjects were able to discriminate faces haptically using short-term memory. In Experiment 2, we used an old/new recognition task in which different sets of three faces (out of six) were learned haptically, followed by three haptic test blocks and one visual test block. In contrast to Casey and Newell (2007), we used the same printed face masks for recognition in both modalities. We found that participants could recognize faces haptically, although recognition accuracy was low (65%) and tended to decrease across blocks. Cross-modal recognition, however, was at chance level (48%). In Experiment 3, we changed the design such that haptic memory was refreshed before each test block by repeated exposure to the three learned faces. We found that performance increased significantly to 76% and became more consistent across blocks. Most importantly, however, we found clear evidence for cross-modal transfer, as visual performance rose above chance level (62%). Our results demonstrate that during visual face recognition, participants have access to information learned during haptic exploration, perhaps allowing them to form a visual image from haptic information. In Experiment 4, we interchanged the learning and recognition modalities with respect to Experiments 2 and 3, testing within-modality recognition in the visual domain and cross-modal transfer by haptic recognition of the face masks. Using the same experimental design as in Experiment 2, we found that performance in the visual within-modality condition increased significantly to 89% and became more consistent across blocks (71% compared to 39% for Experiment 2). However, recognition accuracy decreased across blocks (from 96% to 87%). Interestingly, cross-modal performance was significantly higher than in Experiment 2 (at 69%), demonstrating a clear advantage in cross-modal transfer for vision as the learning modality. The reasons for the observed differences in cross-modal transfer remain to be investigated. Possible factors include differences in visual versus haptic memory permanence, vision as the dominant and therefore preferred learning modality, and finally the role of visual imagery in cross-modal transfer.