Book Chapter

Extracting egomotion from optic flow: limits of accuracy and neural matched filters

Citation

Dahmen, H.-J., Franz, M., & Krapp, H. (2001). Extracting egomotion from optic flow: limits of accuracy and neural matched filters. In J. Zanker & J. Zeil (Eds.), Motion Vision: Computational, Neural, and Ecological Constraints (pp. 143-168). Berlin, Germany: Springer.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-E38A-8
Abstract
In this chapter we review two pieces of work aimed at understanding the principal limits of extracting egomotion parameters from optic flow fields (Dahmen et al. 1997) and the functional significance of the receptive field organization of motion-sensitive neurones in the fly’s visual system (Franz and Krapp 1999). In the first study, we simulated the noisy image flow experienced by an observer moving through an environment of randomly distributed objects, for different magnitudes and directions of simultaneous rotation R and translation T. Estimates R’ of the magnitude and direction of R, and t’ of the direction of T, were derived from samples of this perturbed image flow using an iterative procedure proposed by Koenderink and van Doorn (1987), and were compared with the original vectors. The sampling was restricted to one or two cone-shaped subregions of the visual field of variable angular size, with viewing directions oriented either parallel or orthogonal to the egomotion vectors R and T. We also investigated the influence of environmental structure, such as various depth distributions of objects and the role of planar or spherical surfaces. From our results we derive two general rules for optimizing egomotion estimates: (i) errors are minimized by expanding the field of view; (ii) sampling image motion from opposite viewing directions improves accuracy, particularly for small fields of view.
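
To make the estimation problem concrete, the sketch below simulates noisy spherical flow and recovers R and the direction of T by alternating least squares. It is a minimal illustration of the standard spherical flow model p(d) = -mu(d)(T - (T·d)d) - R × d, not the authors' implementation; the alternating scheme is only loosely in the spirit of the iterative Koenderink and van Doorn (1987) procedure, and the scene parameters, noise level, and all variable names are assumptions made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def skew(d):
    """Cross-product matrix: skew(d) @ v == d x v."""
    return np.array([[0.0, -d[2], d[1]],
                     [d[2], 0.0, -d[0]],
                     [-d[1], d[0], 0.0]])

# Simulated scene: random viewing directions on the unit sphere, random
# nearnesses mu_i = 1/distance (all values here are illustrative assumptions).
N = 500
dirs = rng.normal(size=(N, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
mu_true = rng.uniform(0.5, 2.0, size=N)
T_true = np.array([1.0, 0.0, 0.3]); T_true /= np.linalg.norm(T_true)
R_true = np.array([0.02, -0.05, 0.01])           # rotation rate, rad per unit time

# Spherical flow model: p(d) = -mu (T - (T.d) d) - R x d, plus sensor noise.
p = np.array([-m * (T_true - (T_true @ d) * d) - np.cross(R_true, d)
              for d, m in zip(dirs, mu_true)])
p += 0.002 * rng.normal(size=p.shape)

# Alternating least squares over (T, R) and the nearnesses mu_i.
mu = np.ones(N)                                  # unknown nearnesses, init to 1
for _ in range(30):
    # Step 1: with mu fixed, flow is linear in (T, R) -> stacked 3N x 6 system.
    A = np.zeros((3 * N, 6))
    for i, d in enumerate(dirs):
        A[3*i:3*i+3, :3] = -mu[i] * (np.eye(3) - np.outer(d, d))
        A[3*i:3*i+3, 3:] = skew(d)               # -R x d == skew(d) @ R
    x, *_ = np.linalg.lstsq(A, p.reshape(-1), rcond=None)
    T_est, R_est = x[:3], x[3:]
    T_est /= np.linalg.norm(T_est)               # only the direction of T is recoverable
    # Step 2: with (T, R) fixed, each mu_i has a closed-form 1-D solution.
    for i, d in enumerate(dirs):
        a = -(T_est - (T_est @ d) * d)           # translational flow template
        q = p[i] + np.cross(R_est, d)            # de-rotated flow
        mu[i] = max((q @ a) / (a @ a), 1e-6)     # nearness must stay positive

print("angular error of t' (deg):",
      np.degrees(np.arccos(np.clip(abs(T_est @ T_true), 0.0, 1.0))))
print("error of R':", np.linalg.norm(R_est - R_true))
```

Restricting `dirs` to one or two cone-shaped subregions of the sphere reproduces the chapter's manipulation of field-of-view size and placement, and makes the two rules above directly observable: the errors grow as the cone narrows and shrink when a second cone samples the opposite viewing direction.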