
Released

Poster

The achievement of object constancy across depth rotation for unimodal and crossmodal visual and haptic object recognition

MPS-Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84940

Lawson, R
Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Lawson, R., & Bülthoff, H. (2008). The achievement of object constancy across depth rotation for unimodal and crossmodal visual and haptic object recognition. Poster presented at 31st European Conference on Visual Perception, Utrecht, Netherlands.


Cite as: http://hdl.handle.net/11858/00-001M-0000-0013-C7F7-0
Abstract
We investigated whether achieving object constancy across depth rotation was similar for visual (V) versus haptic (H) inputs by testing unimodal (VV, HH) and crossmodal (VH, HV) sequential object matching. We presented 60 white, hand-sized, plastic object models comprising 20 pairs from similarly shaped categories (bath/sink, pig/dog, key/sword) and a midway morph (e.g., half-bath/half-sink) between each pair. These objects were placed at fixed orientations behind an LCD screen that was opaque for haptic inputs and clear for visual presentations. A 90° rotation from the first to the second object on a trial impaired people's ability to detect shape changes in all conditions except HV matching. Task difficulty was varied between groups by manipulating shape dissimilarity on mismatch trials. For VV matches only, task-irrelevant rotations disrupted performance more when the task was harder. Viewpoint thus influenced both visual and haptic object identification, but the effects of depth rotations differed across modalities and for unimodal versus crossmodal matching. These results suggest that most view-change effects are due to modality-specific processes.