hide:
Keywords:
-
Abstract:
We investigated whether achieving object constancy across depth rotation was similar for visual (V) versus haptic (H) inputs by testing unimodal (VV, HH) and crossmodal (VH, HV) sequential object matching. We presented 60 white, hand-sized, plastic object models comprising 20 pairs from similarly shaped categories (bath/sink, pig/dog, key/sword) and a midway morph (e.g., half-bath/half-sink) between each pair. These objects were placed at fixed orientations behind an LCD screen that was opaque for haptic inputs and clear for visual presentations. A 90° rotation from the first to the second object on a trial impaired people's ability to detect shape changes in all conditions except HV matching. Task difficulty was varied between groups by manipulating shape dissimilarity on mismatch trials. For VV matches only, task-irrelevant rotations disrupted performance more when the task was harder. Viewpoint thus influenced both visual and haptic object identification, but the effects of depth rotations differed across modalities and for unimodal versus crossmodal matching. These results suggest that most view-change effects are due to modality-specific processes.