
Record

Released

Poster

Does adding a visual task component affect fixation accuracy?

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons83811

Bieg, H-J
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83861

Chuang, LL
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons83839

Bülthoff, HH
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
There are no external resources available
Full texts (freely accessible)
There are no freely accessible full texts available
Supplementary material (freely accessible)
There is no freely accessible supplementary material available
Citation

Bieg, H.-J., Chuang, L., & Bülthoff, H. (2010). Does adding a visual task component affect fixation accuracy? Poster presented at the 33rd European Conference on Visual Perception, Lausanne, Switzerland.


Citation link: http://hdl.handle.net/11858/00-001M-0000-0013-BED8-0
Abstract
Video-based eye-trackers are typically calibrated by instructing participants to fixate a series of dots, the physical locations of which are known to the system. Unfortunately, this procedure does not verify whether fixation has actually occurred at the desired locations. This limitation can be remedied by requiring participants to perform a simple visual discrimination task at each location, thus mandating accurate fixation. Still, it remains an open question whether this modification could affect fixation accuracy. In the current study, we compared the accuracy of fixations that were performed with a visual discrimination task and those without such a requirement. Participants either identified the orientation of a small Landolt C (size = 0.1°) or fixated a similar probe without performing the task. Results indicate that participants fixated equally well in both tasks (mean diff. of abs. error = 0.01°, Bayes factor B01 = 4.0 with JZS prior, see [Rouder et al., 2009, Psychonomic Bulletin & Review, 16(2), 225-237]). Given this, we propose adding this visual discrimination task to eye-tracking calibration protocols, as it elicits verifiable fixations without compromising fixation accuracy.
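
The reported Bayes factor B01 = 4.0 quantifies evidence for the null difference in absolute fixation error between the two conditions. As a rough illustration only (not the authors' analysis code), the Python sketch below computes a one-sample/paired JZS Bayes factor B01 from a t statistic and sample size, following Equation 1 of Rouder et al. (2009); the t value and N in the example call are placeholders, since neither is reported in the abstract.

import numpy as np
from scipy import integrate

def jzs_bf01(t, n, r=1.0):
    # Paired / one-sample JZS Bayes factor B01 (Rouder et al., 2009, Eq. 1).
    # t: observed paired t statistic; n: number of participants (pairs);
    # r: scale of the Cauchy prior on effect size (r = 1 is the original JZS prior).
    v = n - 1  # degrees of freedom

    # Marginal likelihood under H0 (no difference in absolute fixation error)
    null = (1 + t**2 / v) ** (-(v + 1) / 2)

    # Marginal likelihood under H1: integrate over the JZS prior on effect size
    def integrand(g):
        return ((1 + n * g * r**2) ** -0.5
                * (1 + t**2 / ((1 + n * g * r**2) * v)) ** (-(v + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))

    alt, _ = integrate.quad(integrand, 0, np.inf)
    return null / alt

# Placeholder values for illustration; neither t nor N is given in the abstract.
print(jzs_bf01(t=0.5, n=20))

Values of B01 above 1 favor the null hypothesis; the B01 = 4.0 reported in the abstract means the data are about four times more likely under the null (no accuracy difference) than under the alternative.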