
Record


Released

Poster

SceneGen: Automated 3D Scene Generation for Psychophysical Experiments

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83918

Franz,  G
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons84287

von der Heyde,  M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External resources
No external resources are available
Full texts (freely accessible)
No freely accessible full texts are available
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Franz, G., von der Heyde, M., & Bülthoff, H. H. (2003). SceneGen: Automated 3D Scene Generation for Psychophysical Experiments. Poster presented at 6. Tübinger Wahrnehmungskonferenz (TWK 2003), Tübingen, Germany.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-DD1C-2
Abstract
For a systematic investigation of the perception of real spaces, photographs offer a chance to combine pictorial realism with laboratory experimental conditions. Psychophysical methods, however, often need a large variety of fully controlled stimuli, which is difficult to achieve with photographs of real scenes. Virtual scenes, on the other hand, provide the necessary flexibility, but their generation by hand is usually too labor-intensive for larger quantities. Our SceneGen toolbox is capable of integrating the advantages of both in a fully automated process. SceneGen combines the good pictorial quality of photo textures, a physics-based radiosity lighting simulation (POVRay renderer), and the complete and convenient control of a high-level, feature-oriented, XML-based description language. Thus, all scene features and rendering parameters are independently adjustable. External objects or scene parts can be integrated via a VRML interface. All this allows for the automated generation of an unlimited number of 3D multi-textured, realtime-capable OpenGL models or panoramic images with exactly defined differences. The applicability of the scenes as psychophysical stimuli is demonstrated by our current work on the influence of view parameters on distance estimates and semantic differential ratings in virtual reality. Nine subjects in two groups rated two sets of 20 precomputed rectangular interiors. The rooms differed in dimensions, proportions, and the number and form of openings within ranges similar to those of real rooms, but had identical surface properties and illumination. The results show a significant effect of the main experimental parameter, eyepoint height, on perceived egocentric distances as well as on allocentric distances perpendicular to the gaze direction. Surprisingly, allocentric distance estimates parallel to the gaze direction are not significantly influenced. This suggests that the participants' horizontal self-location is affected by the simulated eyepoint height. Our experimental paradigm allowed us to investigate spatial perception depending solely on pictorial cues under fully controlled but diverse and comparatively natural conditions. SceneGen is expected to be especially useful for the field of empirical research touching the disciplines of architecture, virtual reality, and perceptual psychophysics.
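To make the pipeline described above more concrete, the sketch below shows what a scene specification in such an XML-based description language might look like. It is purely illustrative: the element and attribute names (scene, room, surface, opening, import, camera, output) are invented here, since the abstract does not document SceneGen's actual schema; only the overall architecture (XML description, POVRay radiosity rendering, photo textures, VRML imports, OpenGL or panorama output) is taken from the text.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical scene description; all names are illustrative,
     not SceneGen's documented schema. -->
<scene renderer="povray" lighting="radiosity">
  <!-- Rectangular interior with independently adjustable features -->
  <room width="4.5" depth="6.0" height="2.8">
    <surface id="walls" texture="photo/plaster.png"/>
    <surface id="floor" texture="photo/parquet.png"/>
    <opening wall="north" shape="rectangular" width="1.2" height="1.4"/>
  </room>
  <!-- External geometry imported through the VRML interface -->
  <import src="furniture/chair.wrl"/>
  <!-- The main experimental parameter varied in the study -->
  <camera eyeheight="1.70" fov="60"/>
  <!-- Realtime-capable OpenGL model or panoramic image output -->
  <output type="opengl-model"/>
</scene>

Under this reading, varying a single attribute such as eyeheight while holding every other element fixed would yield stimulus sets with exactly defined differences, of the kind the experiment described above requires.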