
Record


Released

Thesis

Evaluation of Relevance Feedback Algorithms for XML Retrieval

MPG Authors
http://pubman.mpdl.mpg.de/cone/persons/resource/persons45520

Solomon,  Silvana
Databases and Information Systems, MPI for Informatics, Max Planck Society;
International Max Planck Research School, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45720

Weikum,  Gerhard
Databases and Information Systems, MPI for Informatics, Max Planck Society;

http://pubman.mpdl.mpg.de/cone/persons/resource/persons45380

Schenkel,  Ralf
Databases and Information Systems, MPI for Informatics, Max Planck Society;

External Resources
No external resources are available
Full texts (publicly accessible)
No publicly accessible full texts are available
Supplementary material (publicly accessible)
No publicly accessible supplementary materials are available
Citation

Solomon, S. (2007). Evaluation of Relevance Feedback Algorithms for XML Retrieval. Master Thesis, Universität des Saarlandes, Saarbrücken.


Citation link: http://hdl.handle.net/11858/00-001M-0000-000F-1D94-C
Abstract
Information retrieval and feedback in XML are relatively new fields for researchers; natural questions arise, such as: how good are the feedback algorithms in XML IR? Can they be evaluated with standard evaluation tools? Even though some evaluation methods have been proposed in the literature, it is still not clear which of them are applicable in the context of XML IR, and which metrics they can be combined with to assess the quality of XML retrieval algorithms that use feedback. We propose a solution for fairly evaluating the performance of XML search engines that use feedback to improve query results. Compared to previous approaches, we aim at removing the effect of the results whose relevance is already known to the system, and at measuring the improvement on unseen relevant elements. We implemented our proposed evaluation methodologies by extending a standard evaluation tool with a module capable of assessing feedback algorithms for a specific set of metrics. We performed multiple tests on runs from both INEX 2005 and INEX 2006, covering two different XML document collections. The performance of the assessed feedback algorithms did not reach the theoretical optimal values, either for the proposed evaluation methodologies or for the metrics used. The analysis of the results shows that, although the six evaluation techniques provide good improvement figures, none of them can be declared the absolute winner. Despite the lack of a definitive conclusion, our findings provide a better understanding of the quality of feedback algorithms.
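
The evaluation idea outlined in the abstract, scoring a feedback run only on results whose relevance the system did not already know, can be illustrated with a small residual-style computation. The sketch below is a minimal illustrative assumption, not the actual INEX tooling or the tool extension described in the thesis; the function name, the data layout, and the choice of precision@k are all hypothetical.

# Minimal sketch of a "residual" evaluation for feedback runs:
# elements whose relevance was already shown to the system (the feedback
# elements) are removed from both the run and the judgments, so the metric
# only rewards improvement on unseen relevant elements.
# All names and the data layout are illustrative assumptions, not the
# actual INEX evaluation tool described in the thesis.

def residual_precision_at_k(run, qrels, feedback_elements, k=10):
    """Precision@k over the residual collection.

    run               -- ranked list of element ids returned after feedback
    qrels             -- dict: element id -> relevance grade (> 0 means relevant)
    feedback_elements -- element ids whose judgments were fed back to the system
    """
    seen = set(feedback_elements)
    residual_run = [e for e in run if e not in seen]   # drop already-judged elements
    top_k = residual_run[:k]
    relevant = sum(1 for e in top_k if qrels.get(e, 0) > 0)
    return relevant / k if k else 0.0


if __name__ == "__main__":
    # Toy example: e3 was used for feedback, so it is excluded from scoring.
    run = ["e3", "e7", "e1", "e9", "e4"]
    qrels = {"e1": 1, "e3": 2, "e9": 1}
    print(residual_precision_at_k(run, qrels, feedback_elements=["e3"], k=3))

On the toy data, the residual run is e7, e1, e9, e4, and two of the top three residual results are relevant, so the score reflects only gains on elements the system had not already seen judged.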