
Released

Poster

Motor Representations in Visual Object Recognition

MPS-Authors
/persons/resource/persons83960

Helbig, HB
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Graf, M., Helbig, H., & Kiefer, M. (2005). Motor Representations in Visual Object Recognition. Poster presented at 8th Tübinger Wahrnehmungskonferenz (TWK 2005), Tübingen, Germany.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-D657-1
Abstract
It has recently been proposed that object recognition relies on coordinate transformations, i.e. on processes similar to those involved in visuomotor control [1]. Thus, the two visual streams involved in object
recognition and object-directed action may rely on common computational principles, which
opens up the possibility of interactions between the two streams. Existing behavioral and neurophysiological
findings suggest that viewing manipulable objects automatically potentiates
possible actions [e.g., 2,3]. We investigated whether action knowledge has a functional role
in visual object recognition. More specifically, we used a priming paradigm to test whether
objects are recognized better when viewed after another object which affords congruent as
compared to incongruent motor interactions. Two grey-scale pictures of artifactual manipulable
objects (tools, kitchen utensils, musical instruments) were presented sequentially. Subjects
were required to name the objects. The stimuli were briefly presented and masked. The presentation
time of the second object was adjusted individually in an adjustment phase so that
naming accuracy approached 80%. In the congruent condition both objects afforded similar
motor interactions, whereas in the incongruent condition they afforded dissimilar motor interactions. Stimulus
pairs in both conditions were matched for baseline naming accuracy, word frequency, word
length, as well as visual and semantic similarity. We found that naming accuracy was higher
in the congruent than in the incongruent condition (Experiments 1 and 2). This action congruency
effect indicates that object naming is facilitated by a previous activation of an appropriate
action representation. In two further experiments we investigated the nature of the representations
underlying the action congruency effect. The effect was reduced or absent when the
prime stimulus was inverted (Experiment 3), and when the prime was presented as a word (Experiment
4). This suggests that the action representations underlying this congruency effect
are closer to specific (parameterised) motor representations than to abstract semantic representations.
Overall, the findings suggest that the recognition of manipulable objects involves not
only visual but also action representations. These are not abstract semantic representations, but
are relatively close to motor representations.