
Record

Released

Conference Paper

Perceptual Display: Apparent Enhancement of Scene Detail and Depth

MPG Authors

Myszkowski, Karol
Computer Graphics, MPI for Informatics, Max Planck Society

Tursun, Okan Tarhan
Computer Graphics, MPI for Informatics, Max Planck Society

Kellnhofer, Petr
Computer Graphics, MPI for Informatics, Max Planck Society

Templin, Krzysztof
Computer Graphics, MPI for Informatics, Max Planck Society

Arabadzhiyska, Elena
Computer Graphics, MPI for Informatics, Max Planck Society

Didyk, Piotr
Computer Graphics, MPI for Informatics, Max Planck Society

Seidel, Hans-Peter
Computer Graphics, MPI for Informatics, Max Planck Society

External Resources
No external resources are available for this record.
Full Texts (restricted access)
No full texts are currently available for your IP range.
Full Texts (freely accessible)
No freely accessible full texts are available in PuRe.
Supplementary Material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Myszkowski, K., Tursun, O. T., Kellnhofer, P., Templin, K., Arabadzhiyska, E., Didyk, P., et al. (2018). Perceptual Display: Apparent Enhancement of Scene Detail and Depth. In Human Vision and Electronic Imaging (pp. 1-10). Bellingham, WA: SPIE/IS&T. doi:10.2352/ISSN.2470-1173.2018.14.HVEI-501.


Citation link: https://hdl.handle.net/21.11116/0000-0001-5F64-5
Abstract
Predicting human visual perception of image differences has several applications, such as compression, rendering, editing, and retargeting. Current approaches, however, ignore the fact that the human visual system compensates for geometric transformations; for example, we perceive an image and a rotated copy of it as identical. Instead, such approaches report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images indeed becomes increasingly difficult. Between these two extremes, we propose a system to quantify the effect of transformations, not only on the perception of image differences, but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field and then convert this field into a field of elementary transformations such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase in difficulty when compensating for elementary transformations. Transformation entropy is proposed as a novel measure of complexity in a flow field. This representation is then used in applications such as the comparison of non-aligned images, where transformations cause threshold elevation, and the detection of salient transformations.
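The abstract outlines a pipeline of fitting local homographies to an optical flow field, decomposing them into elementary transformations, and summarizing the field with a transformation entropy. The sketch below is not the authors' implementation; it is a minimal illustration under stated assumptions: the flow is a dense per-pixel (dx, dy) field, each patch's homography is fitted with a plain DLT least-squares solve, the decomposition keeps only translation, rotation, and isotropic scale plus the raw perspective terms, and "transformation entropy" is approximated here by a simple histogram entropy over the quantized parameters. All function names are hypothetical.

```python
# Illustrative sketch only -- assumptions as described in the lead-in paragraph.
import numpy as np

def fit_local_homography(ys, xs, flow_patch):
    """Fit a 3x3 homography H mapping (x, y, 1) -> (x + dx, y + dy, 1)
    over all pixels of a patch, via the standard DLT least-squares system."""
    pts_src = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(float)
    pts_dst = pts_src + flow_patch.reshape(-1, 2)
    rows = []
    for (x, y), (u, v) in zip(pts_src, pts_dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)
    # The homography is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def elementary_parameters(H):
    """Split the affine part of H into translation, rotation angle, and an
    isotropic scale (sqrt of the affine determinant); keep perspective terms as-is."""
    tx, ty = H[0, 2], H[1, 2]
    a, b, c, d = H[0, 0], H[0, 1], H[1, 0], H[1, 1]
    rotation = np.arctan2(c, a)
    scale = np.sqrt(max(a * d - b * c, 1e-12))
    return np.array([tx, ty, rotation, scale, H[2, 0], H[2, 1]])

def transformation_entropy(params_field, bins=16):
    """Shannon entropy of quantized transformation parameters across patches
    (a plausible stand-in for the paper's measure, not its definition)."""
    entropy = 0.0
    for k in range(params_field.shape[1]):
        hist, _ = np.histogram(params_field[:, k], bins=bins)
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        entropy += -(p * np.log2(p)).sum()
    return entropy

# Usage on a synthetic flow: a pure global translation should yield near-zero entropy.
h, w, patch = 32, 32, 8
flow = np.tile(np.array([1.5, -0.5]), (h, w, 1))  # dx = 1.5, dy = -0.5 everywhere
params = []
for y0 in range(0, h, patch):
    for x0 in range(0, w, patch):
        ys, xs = np.mgrid[y0:y0 + patch, x0:x0 + patch]
        H = fit_local_homography(ys, xs, flow[y0:y0 + patch, x0:x0 + patch])
        params.append(elementary_parameters(H))
print(transformation_entropy(np.asarray(params)))
```

A spatially coherent flow (one dominant transformation) concentrates the parameter histograms and keeps this entropy low, while incoherent, patch-varying transformations spread them out and raise it, which is the intuition behind using entropy as a complexity measure for the flow field.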