Released

Poster

Neural Mapping and Parallel Optical Flow Computation for Autonomous Navigation

MPG Authors
There are no MPG authors available for this publication
Full Texts (freely accessible)
There are no freely accessible full texts available in PuRe
Supplementary Material (freely accessible)
There is no freely accessible supplementary material available
Citation

Bülthoff, H., Little, J., & Mallot, H. (1988). Neural Mapping and Parallel Optical Flow Computation for Autonomous Navigation. Poster presented at the International Neural Network Society First Annual Meeting (INNS 1988), Boston, MA, USA.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-EF3F-5
Abstract
In this paper, the authors present information processing strategies, derived from neurobiology, which considerably facilitate the evaluation of optical flow data. In most previous approaches, the extraction of motion data from varying image intensities is complicated by the so-called aperture and correspondence problems. The correspondence problem arises if motion detection is based on image features that have to be identified in subsequent frames. If this problem is avoided by continuously registering image intensity changes not necessarily corresponding to features, the motion signal obtained becomes ambiguous due to the aperture problem. Recently, a new algorithm for the computation of optical flow has been developed that produces dense motion data which are not subject to the aperture problem. Once the velocity vector field is established, optical flow analysis has to deal with the global space-variance of this field, which carries much of the information. Local detectors for divergence (looming) and curl, which can be used in tasks such as obstacle avoidance, produce space-variant results even in the absence of obstacles. Also, motion detection itself could be restricted to just one direction per site for certain information processing tasks, were it not for the space-variance of that direction. For observer motion on a planar surface, these problems can be overcome by applying to the image coordinates a retinotopic mapping, or transform, that inverts the perspective for points on this surface.
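
To make the two operations named in the abstract concrete, the following is a minimal sketch, not taken from the poster itself: a finite-difference estimate of the local divergence (looming) and curl of a dense flow field, and a pinhole-camera inverse perspective mapping that back-projects image points onto a planar ground surface. The camera parameters (FOCAL, CAM_HEIGHT, TILT), the function names, and the y-down coordinate convention are illustrative assumptions, not details from the poster.

import numpy as np

FOCAL = 500.0      # focal length in pixels (assumed value)
CAM_HEIGHT = 1.0   # camera height above the ground plane, in metres (assumed)
TILT = 0.3         # downward tilt of the optical axis, in radians (assumed)


def divergence_and_curl(flow_u, flow_v):
    """Finite-difference divergence (looming) and curl of a dense 2-D flow
    field; flow_u and flow_v are arrays of the horizontal and vertical flow
    components on a pixel grid."""
    du_dy, du_dx = np.gradient(flow_u)
    dv_dy, dv_dx = np.gradient(flow_v)
    return du_dx + dv_dy, dv_dx - du_dy


def image_to_ground(u, v):
    """Back-project an image point (u, v), measured from the principal point
    with v positive downward, onto the ground plane. Returns (X, Z) ground
    coordinates relative to the point below the camera, or None if the
    viewing ray does not intersect the ground in front of the camera."""
    d_cam = np.array([u, v, FOCAL])        # ray in camera coords (x right, y down, z forward)
    c, s = np.cos(TILT), np.sin(TILT)
    R = np.array([[1.0, 0.0, 0.0],         # rotate the camera frame into a world frame
                  [0.0,   c,   s],         # whose y-axis is vertical (tilt about x)
                  [0.0,  -s,   c]])
    d_world = R @ d_cam
    if d_world[1] <= 0:                    # ray points at or above the horizon
        return None
    t = CAM_HEIGHT / d_world[1]            # intersect with the plane y = CAM_HEIGHT
    return t * d_world[0], t * d_world[2]  # lateral offset X, forward distance Z


# Example: the optical axis (u = v = 0) meets the ground straight ahead at
# distance CAM_HEIGHT / tan(TILT).
print(image_to_ground(0.0, 0.0))           # -> (0.0, ~3.23)

In coordinates remapped this way, the flow induced by observer translation parallel to the ground plane becomes approximately uniform, so points that do not lie on the plane, such as obstacles, appear as local deviations; this illustrates the space-variance-removing effect the abstract attributes to the retinotopic transform.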