
Released

Poster

Neural Mapping and Parallel Optical Flow Computation for Autonomous Navigation

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84072

Mallot, HA
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
No external resources are available
Full texts (freely accessible)
No freely accessible full texts are available
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Bülthoff, H., Little, J., & Mallot, H. (1988). Neural Mapping and Parallel Optical Flow Computation for Autonomous Navigation.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-EF3F-5
Abstract
In this paper, the authors present information processing strategies, derived from neurobiology, which considerably facilitate the evaluation of optical flow data. In most previous approaches, the extraction of motion data from varying image intensities is complicated by the so-called aperture and correspondence problems. The correspondence problem arises if motion detection is based on image features that have to be identified in subsequent frames. If this problem is avoided by continuously registering image intensity changes not necessarily corresponding to features, the motion signal obtained becomes ambiguous due to the aperture problem. Recently, a new algorithm for the computation of optical flow has been developed that produces dense motion data which are not subject to the aperture problem. Once the velocity vector field is established, optical flow analysis has to deal with the global space-variance of this field, which carries much of the information. Local detectors for divergence (looming) and curl, which can be used in tasks such as obstacle avoidance, produce space-variant results even in the absence of obstacles. Also, motion detection itself could be restricted to just one direction per site for certain information processing tasks, were it not for the space-variance of that direction. For observer motion on a planar surface, these problems can be overcome by a retinotopic mapping, or transform, applied to image coordinates which inverts the perspective for points on this surface.
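The perspective-inverting transform mentioned at the end of the abstract can be sketched as an inverse perspective mapping: each image pixel is back-projected onto the ground plane, so that points on that plane get metric ground coordinates. The sketch below is a minimal illustration under an assumed pinhole geometry (focal length f in pixels, camera height h, downward tilt theta); the paper's actual parameterization of the mapping may differ.

```python
import math

def inverse_perspective_map(u, v, f, h, theta):
    """Map image coordinates (u, v) to ground-plane coordinates (X, Y).

    Assumed geometry (an illustrative choice, not taken from the paper):
      - pinhole camera, focal length f in pixels, principal point at (0, 0),
        image x to the right, image y pointing downward;
      - camera at height h above a flat ground plane, tilted down by
        theta radians from horizontal;
      - ground frame: X lateral (right), Y forward along the ground.
    """
    # Denominator is the vertical component of the viewing ray; it is
    # positive only for pixels below the horizon line.
    denom = v * math.cos(theta) + f * math.sin(theta)
    if denom <= 0.0:
        raise ValueError("pixel on or above the horizon: no ground intersection")
    t = h / denom                                        # ray parameter at the plane
    X = u * t                                            # lateral ground coordinate
    Y = t * (f * math.cos(theta) - v * math.sin(theta))  # forward distance
    return X, Y
```

After such a remapping, observer translation parallel to the plane yields a flow field whose direction is no longer space-variant across the remapped image, which is what allows the local detectors discussed above to use a single preferred direction per site.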