
Record


Released

Poster

A relative encoding model of spatiotemporal boundary formation

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83870

Cunningham, DW
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83943

Graf, ABA
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External resources
No external resources are available
Full texts (publicly accessible)
No publicly accessible full texts are available
Supplementary material (publicly accessible)
No publicly accessible supplementary materials are available
Citation

Cunningham, D., Graf, A., & Bülthoff, H. (2002). A relative encoding model of spatiotemporal boundary formation. Poster presented at 5. Tübinger Wahrnehmungskonferenz (TWK 2002), Tübingen, Germany.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-E02C-7
Abstract
When a camouflaged animal sits in front of the appropriate background, the animal is effectively invisible. As soon as the animal moves, however, it is easily visible despite the fact that at any given instant, there is no shape information. This process, referred to as Spatiotemporal Boundary Formation (SBF), can be initiated by a wide range of texture transformations, including changes in the visibility, shape, or color of individual texture elements. Shipley and colleagues have gathered a wealth of psychophysical data on SBF, and have presented a local motion vector model for the recovery of the orientation of local edge segments (LESs) from as few as three element changes (Shipley and Kellman, 1997). Here, we improve and extend this model to cover the extraction of global form and motion. The model recovers the orientation of the LESs from a dataset consisting of the relative spatiotemporal locations of the element changes. The recovered orientations of as few as two LESs are then used to extract the global motion, which is then used to determine the relative spatiotemporal location and minimal length of the LESs. To complete the global form, the LESs are connected in a manner similar to that used in illusory contours. Unlike Shipley and Kellman’s earlier model, which required that pairs of element changes be represented as local motion vectors, the present model merely encodes the relative spatiotemporal locations of the changes in any arbitrary coordinate system. Computational simulations of the model show that it captures the major psychophysical aspects of SBF, including a dependency on the spatiotemporal density of element changes and a sensitivity to spurious changes. Interestingly, the relative encoding scheme yields several emergent properties that are strikingly similar to the perception of aperture-viewed figures (anorthoscopic perception).
The model captures many of the important qualities of SBF, and offers a framework within which additional aspects of SBF may be modelled. Moreover, the relative encoding approach seems to inherently encapsulate other phenomena, offering the possibility of unifying several of them within a single mathematical model.
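The core computation described in the abstract, recovering an LES orientation from the relative spatiotemporal locations of as few as three element changes, can be sketched as follows. This is a hypothetical illustration, not the authors' code: the function `recover_les`, the constant-velocity straight-edge assumption (a point p changes when the sweeping edge reaches it, i.e. n·p = s·t + c for unit normal n and normal speed s), and the example event data are all assumptions made for the sketch.

```python
from math import atan2, degrees, hypot

def recover_les(events):
    """Recover a local edge segment's orientation and normal speed from
    three element changes (x, y, t). Hypothetical sketch of the
    relative-encoding idea: subtracting one event from the others leaves
    only relative spatiotemporal locations, so the coordinate origin is
    arbitrary, as the abstract emphasizes."""
    (x1, y1, t1), (x2, y2, t2), (x3, y3, t3) = events
    # relative spatiotemporal locations of the changes
    a11, a12, b1 = x2 - x1, y2 - y1, t2 - t1
    a21, a22, b2 = x3 - x1, y3 - y1, t3 - t1
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("element changes are collinear; edge is ambiguous")
    # solve (n/s) . (p_i - p_1) = t_i - t_1 for m = n / s by Cramer's rule
    mx = (b1 * a22 - b2 * a12) / det
    my = (a11 * b2 - a21 * b1) / det
    speed = 1.0 / hypot(mx, my)          # normal speed of the sweeping edge
    # the edge itself is perpendicular to its normal; report in [0, 180)
    orientation = (degrees(atan2(my, mx)) + 90.0) % 180.0
    return orientation, speed

# three changes produced by a vertical edge x = 2t sweeping rightwards
events = [(0.0, 0.0, 0.0), (2.0, 1.0, 1.0), (4.0, -1.0, 2.0)]
orientation, speed = recover_les(events)  # 90.0 degrees, speed 2.0
```

Per the aperture problem, a single LES constrains only the motion component along its normal; combining the recovered orientations of two or more LESs would then disambiguate the global motion, as the abstract describes.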