
Released

Poster

Moving objects in ultra-rapid visual categorisation result in better accuracy, but slower reaction times than static presentations

MPG Authors
/persons/resource/persons84291

Vuong, QC
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84258

Thornton, IM
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External resources
No external resources have been deposited
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe
Supplementary material (freely accessible)
No freely accessible supplementary materials are available
Citation

Kirchner, H., Vuong, Q., Thorpe, S., & Thornton, I. (2005). Moving objects in ultra-rapid visual categorisation result in better accuracy, but slower reaction times than static presentations. Poster presented at 8th Tübinger Wahrnehmungskonferenz (TWK 2005), Tübingen, Germany.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-D659-E
Abstract
Ultra-rapid categorisation studies have analysed human responses to briefly flashed, static natural scenes in order to determine the time needed to process different kinds of visual objects. Recently, Kirchner and Thorpe reported that reaction times can be extremely fast if subjects are asked to move their eyes to the side where an animal had appeared. Accuracy was remarkably good, with the fastest reliable saccades occurring only 130 ms after stimulus onset. Using a 2AFC task with apparent-motion displays and manual responses, Vuong and colleagues further showed that humans can be detected more easily than machines. In the present study we combined the two approaches in order to determine the processing speed for static vs. dynamic displays. In blocked conditions, human subjects were asked to detect either an animal or a machine, which were presented statically in half of the trials and in apparent motion in the other half. On each trial, an animal and a machine were presented simultaneously to the left and right of fixation, and subjects were asked to make a saccade or to press a button on the target side. Manual responses and saccadic eye movements both resulted in good accuracy, and reaction times to animals were significantly faster than to machines. Only saccadic eye movements showed a clear accuracy advantage for dynamic over static trials, but the analysis of mean reaction times pointed to a speed-accuracy trade-off. This might be explained by different response modes, as seen in the latency distributions. We conclude that form processing can be improved by stimulus motion, and that the speed of this process can be observed much more directly in eye-movement latencies than in manual responses.