
Released

Poster

SceneGen: Automated 3D Scene Generation for Psychophysical Experiments

MPS-Authors
/persons/resource/persons83918

Franz,  G
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84287

von der Heyde,  M
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons83839

Bülthoff,  HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Franz, G., von der Heyde, M., & Bülthoff, H. (2003). SceneGen: Automated 3D Scene Generation for Psychophysical Experiments. Poster presented at 6. Tübinger Wahrnehmungskonferenz (TWK 2003), Tübingen, Germany.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-DD1C-2
Abstract
For a systematic investigation of the perception of real spaces, photographs offer a chance to combine pictorial realism with laboratory experimental conditions. Psychophysical methods, however, often need a large variety of fully controlled stimuli, which is difficult to achieve with photographs of real scenes. Virtual scenes, on the other hand, provide the necessary flexibility, but their generation by hand is usually too labor-intensive for larger quantities. Our SceneGen toolbox integrates the advantages of both in a fully automated process. SceneGen combines the good pictorial quality of photo textures, a physics-based radiosity lighting simulation (POV-Ray renderer), and the complete and convenient control of a high-level, feature-oriented, XML-based description language. Thus, all scene features and rendering parameters are independently adjustable. External objects or scene parts can be integrated via a VRML interface. All this allows for the automated generation of an unlimited number of 3D multi-textured, realtime-capable OpenGL models or panoramic images with exactly defined differences. The applicability of the scenes as psychophysical stimuli is demonstrated by our current work on the influence of view parameters on distance estimates and semantic differential ratings in virtual reality. Nine subjects in two groups rated two sets of 20 precomputed rectangular interiors. The rooms differed in dimensions, proportions, and the number and form of openings within ranges similar to real rooms, but had identical surface properties and illumination. The results show a significant effect of the main experimental parameter, eyepoint height, on perceived egocentric distances as well as on allocentric distances perpendicular to gaze direction. Surprisingly, allocentric distance estimates parallel to gaze direction are not significantly influenced. This suggests that the participants' horizontal self-location is affected by the simulated eyepoint height. Our experimental paradigm allowed us to investigate spatial perception depending solely on pictorial cues under fully controlled but diverse and comparatively natural conditions. SceneGen is expected to be especially useful for the field of empirical research touching the disciplines of architecture, virtual reality, and perceptual psychophysics.
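
To illustrate the kind of automated, parameterized stimulus generation the abstract describes, the Python sketch below builds a batch of simple XML room descriptions in which only the factors of interest (room proportions and eyepoint height) vary, while surface properties and illumination stay fixed. This is purely a hypothetical illustration: the abstract does not publish SceneGen's actual XML schema or API, so every element and attribute name here (scene, room, surface, lighting, camera) is an assumption, not SceneGen's real format.

# Hypothetical sketch of parameterized stimulus generation.
# All XML element/attribute names are illustrative assumptions; the
# actual SceneGen description language is not specified in the abstract.
import itertools
import xml.etree.ElementTree as ET
from pathlib import Path

def make_scene(width, depth, height, eyepoint_height):
    """Build a minimal room description with independently set parameters."""
    scene = ET.Element("scene")
    room = ET.SubElement(scene, "room",
                         width=str(width), depth=str(depth), height=str(height))
    # Surface properties and illumination are identical across all stimuli,
    # as required for controlled comparisons.
    ET.SubElement(room, "surface", texture="plaster_photo", reflectance="0.6")
    ET.SubElement(scene, "lighting", model="radiosity")
    ET.SubElement(scene, "camera", x="0", y=str(eyepoint_height), z="0",
                  projection="panoramic")
    return scene

out_dir = Path("stimuli")
out_dir.mkdir(exist_ok=True)

# Vary only the experimental factors, so differences between the
# generated stimuli are exactly defined.
widths = [3.0, 4.5, 6.0]
depths = [3.0, 6.0]
eye_heights = [1.2, 1.6, 2.0]

for i, (w, d, e) in enumerate(itertools.product(widths, depths, eye_heights)):
    tree = ET.ElementTree(make_scene(w, d, 2.8, e))
    tree.write(out_dir / f"room_{i:03d}.xml")

In the actual toolbox, such descriptions would then be rendered (via the POV-Ray radiosity simulation mentioned above) into realtime-capable OpenGL models or panoramic images; the sketch only covers the description-generation stage.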