Record


Released

Poster

How do we grasp (virtual) objects in three-dimensional space?

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons84238

Stockmeier,  K
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84990

Franz,  VH
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources available
Full texts (freely accessible)
There are no freely accessible full texts available
Supplementary material (freely accessible)
There are no freely accessible supplementary materials available
Citation

Stockmeier, K., Bülthoff, H., & Franz, V. (2003). How do we grasp (virtual) objects in three-dimensional space? Poster presented at the Third Annual Meeting of the Vision Sciences Society (VSS 2003), Sarasota, FL, USA.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-DB67-4
Abstract
Jeannerod (1981, 1984) studied extensively the relationship between object size and grasping parameters, which has been influential for the interpretation of grasping data. The maximum grip aperture (MGA) scales linearly with object size, but the slope is less than 1 (approx. 0.82; cf. Smeets & Brenner, 1999). Here, we investigated whether the location of the object in three-dimensional space influences the MGA. We also addressed the question of whether grasping of virtual objects shows the same characteristics as natural prehension. Virtual environments could enable experimenters to easily vary objects after movement onset and thereby explore the mechanisms of online control in visually guided movements. A virtual disc (36, 40, or 44 mm in diameter) was rendered using stereo computer graphics at 27 positions at different heights and locations relative to the observer. Virtual haptic feedback was given using two robot arms (PHANToM™), one connected to the index finger and one to the thumb. Ten participants grasped the discs and transported them to a goal area, where they dropped them. The stereoscopically rendered discs were viewed through a mirror, such that the visual and haptic feedback matched. The positions of the fingertips were measured using the two robot arms and an Optotrak™, in order to test the accuracy of the PHANToM devices. The MGA depended on the distance of the object with respect to the observer's body but not on the height of the disc. Participants scaled their MGA according to the size of the virtual disc, but with a slightly smaller slope (0.64 ± 0.06) compared to natural environments. This could indicate that tactile feedback (in addition to haptic feedback) is needed to perform natural grasping movements.
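
A minimal sketch of the scaling relation discussed above, assuming a simple linear fit of MGA to disc diameter s (the intercept a is not reported in the abstract and appears here only to make the linear form explicit):

MGA(s) = a + b · s

where b ≈ 0.82 for natural grasping (Smeets & Brenner, 1999) and b = 0.64 ± 0.06 for the virtual discs in this study.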