Extracting Affordance Cues from Observed Human-Object Interactions


Lies, J.-P.
Research Group Computational Vision and Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society


Lies, J.-P. (2008). Extracting Affordance Cues from Observed Human-Object Interactions.

Autonomous, cognitive agents such as mobile robots have become increasingly important in recent years. To achieve a high level of autonomy, an agent must be able to interact with objects even if it has not seen that kind of object before. Here, categorizing objects by their functionality appears to be the key. In this thesis, we briefly introduce the idea of functional object category detection in the context of human-object interaction. We present a system that extracts visual shape descriptions (affordance cues) of object parts that are characteristic for a certain task, based on the observation of a prototypical interaction. The system is implemented and integrated into a cognitive agent framework, which allows cooperation with, e.g., manipulation systems. We show that the system can extract affordance cues for different grasping techniques on different objects, but we also highlight restrictions of the system with respect to the scenery in which the interaction is observed. In cooperation with other research groups, we were able to use the detected affordance cues to detect objects in cluttered scenes and as input for manipulation.
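
The abstract does not spell out the extraction pipeline, so the following Python sketch is only an illustration of the general idea, under the assumption that per-frame hand and object segmentation masks and image gradients are available: it locates the hand-object contact region and summarises the shape of the touched object part with a gradient-orientation histogram. The function name, the dilation-based contact test, and the descriptor choice are all hypothetical and not taken from the thesis.

import numpy as np
from scipy.ndimage import binary_dilation

def extract_affordance_cue(object_mask, hand_mask, grad_y, grad_x, patch_size=32):
    """Summarise the shape of the object part touched by the hand in one
    observed interaction frame as a normalised edge-orientation histogram.
    (Illustrative only; not the thesis's actual implementation.)"""
    # Contact region: object pixels adjacent to (slightly dilated) hand pixels.
    contact = object_mask & binary_dilation(hand_mask, iterations=2)
    if not contact.any():
        return None  # no hand-object contact visible in this frame

    # Centre a fixed-size patch on the contact region, clamped to the image.
    ys, xs = np.nonzero(contact)
    half = patch_size // 2
    cy = int(np.clip(ys.mean(), half, grad_y.shape[0] - half))
    cx = int(np.clip(xs.mean(), half, grad_y.shape[1] - half))
    gy = grad_y[cy - half:cy + half, cx - half:cx + half]
    gx = grad_x[cy - half:cy + half, cx - half:cx + half]

    # Shape description of the touched part: gradient-magnitude-weighted
    # histogram of edge orientations (a crude stand-in for richer descriptors).
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx) % np.pi
    hist, _ = np.histogram(ori, bins=9, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

Accumulating such histograms over the frames of one prototypical grasp and storing them with the corresponding grasp label would yield the kind of task-specific shape cue the abstract refers to, which could then be matched against new scenes for detection or passed to a manipulation component.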