
Item Details


Released

Conference Paper

Adapting Preshaped Grasping Movements Using Vision Descriptors

MPS-Authors

Kroemer, O
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Detry, R
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Piater, J
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Peters, J
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There is no public fulltext available
Supplementary Material (public)
There is no public supplementary material available
Citation

Kroemer, O., Detry, R., Piater, J., & Peters, J. (2010). Adapting Preshaped Grasping Movements Using Vision Descriptors. From Animals to Animats 11, 156-166.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-BEB2-2
Abstract
Grasping is one of the most important abilities needed for future service robots. In the task of picking up an object from within clutter, traditional robotics approaches would determine a suitable grasping point and then use a movement planner to reach the goal. The planner would require precise and accurate information about the environment and long computation times, both of which are often not available. Therefore, methods are needed that execute grasps robustly even with imprecise information gathered only from standard stereo vision. We propose techniques that reactively modify the robot's learned motor primitives based on non-parametric potential fields centered on the Early Cognitive Vision descriptors. These allow both obstacle avoidance and the adaptation of finger motions to the object's local geometry. The methods were tested on a real robot, where they led to improved adaptability and quality of grasping actions.
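For intuition only, the sketch below illustrates the general idea of a non-parametric, kernel-based potential field centered on 3D vision-descriptor points that nudges a preshaped reaching trajectory away from nearby clutter. It is not the authors' implementation; the descriptor positions, kernel bandwidth, and gain are hypothetical placeholders chosen for the toy example.

# Minimal illustrative sketch (assumptions noted above); requires NumPy.
import numpy as np

def repulsive_push(x, descriptors, bandwidth=0.05, gain=1e-4):
    """Displacement pushing position x away from descriptor points.

    x           : (3,) current end-effector position
    descriptors : (N, 3) 3D positions of vision descriptors (clutter points)
    """
    diffs = x - descriptors                              # vectors from descriptors to the hand
    sq_dists = np.sum(diffs ** 2, axis=1)
    weights = np.exp(-sq_dists / (2.0 * bandwidth ** 2)) # Gaussian kernel weights
    # Weighted sum of outward-pointing vectors: nearby descriptors push harder.
    return gain * np.sum(weights[:, None] * diffs, axis=0) / bandwidth ** 2

def adapt_trajectory(nominal, descriptors):
    """Shift each waypoint of a nominal grasp trajectory by the field's push."""
    return np.array([x + repulsive_push(x, descriptors) for x in nominal])

if __name__ == "__main__":
    # Toy example: straight-line reach toward a grasp point, one clutter point nearby.
    nominal = np.linspace([0.3, 0.0, 0.3], [0.0, 0.0, 0.05], num=50)
    descriptors = np.array([[0.15, 0.02, 0.15]])
    adapted = adapt_trajectory(nominal, descriptors)
    print("max deviation from nominal [m]:", np.abs(adapted - nominal).max())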