
Record


Released

Poster

Can learning one grasp facilitate novel grasps?

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83906

Ernst,  MO
Research Group Multisensory Perception and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84273

van Veen,  H-J
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources available
Full Texts (freely accessible)
There are no freely accessible full texts available
Supplementary Material (freely accessible)
There is no freely accessible supplementary material available
Citation

Ernst, M., van Veen, H.-J., & Bülthoff, H. (1997). Can learning one grasp facilitate novel grasps? Poster presented at 20th European Conference on Visual Perception, Helsinki, Finland.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-E9F4-8
Abstract
We investigated whether knowledge acquired during repetitive grasping can be used to grasp a similar object differing in position or size. We conducted two experiments using a mirror to project a computer-generated image to the location of an object to be grasped. Subjects saw the image until initiation of the grasp but were unable to see either their hand or the real object. The training phase consisted of repetitive grasps to a single cube in a fixed position displaying a corresponding image. In the test phase we used the same cube in different positions but displayed only a small position-marker (experiment 1). In experiment 2, subjects grasped for differently sized cubes in the trained position. To indicate size changes, we displayed appropriately sized cubes at a different location. In the subsequent control phase of each experiment, subjects saw fully rendered cubes in appropriate positions and sizes instead of the position-marker or size cue. Performance in the test and control phases was similar for all measured grasp parameters, including maximum preshape aperture, maximum speed, and grasp duration. In experiment 2, in which the size of the cubes changed, variability in grasp duration (±110 ms vs ±40 ms) and maximum preshape aperture (±10 mm vs ±4 mm) was greater in the test phase than in the control phase, indicating increased uncertainty in grasping. Had subjects learned a single motor routine, they would not have been able to grasp so well for objects differing in position or size. Together with our previous results (Ernst et al, 1997, paper presented at ARVO), these findings indicate that subjects can make use of stored representations of an object's position and size to produce an appropriate grasp under open-loop conditions.