Released

Journal Article

Where did I take that snapshot? Scene-based homing by image matching

MPS-Authors
/persons/resource/persons83919

Franz, M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84193

Schölkopf, B
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84072

Mallot, HA
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Franz, M., Schölkopf, B., Mallot, H., & Bülthoff, H. (1998). Where did I take that snapshot? Scene-based homing by image matching. Biological Cybernetics, 79(3), 191-202. doi:10.1007/s004220050470.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-E7E1-F
Abstract
In homing tasks, the goal is often not marked by visible objects but must be inferred from its spatial relation to the visual cues in the surrounding scene. Exact computation of the goal direction would require knowledge of the distances to visible landmarks, information that is not directly available to passive vision systems. However, if prior assumptions about typical distance distributions are used, a snapshot taken at the goal suffices to compute the goal direction from the current view. We show that most existing approaches to scene-based homing implicitly assume an isotropic landmark distribution. As an alternative, we propose a homing scheme that uses parameterized displacement fields. These are obtained from an approximation that incorporates prior knowledge about perspective distortions of the visual environment. A mathematical analysis proves that neither approximation prevents the schemes from approaching the goal with arbitrary accuracy, but that the two lead to different errors in the computed goal direction. Mobile robot experiments are used to test the theoretical predictions and to demonstrate the practical feasibility of the new approach.