
Thesis

Evaluation of Relevance Feedback Algorithms for XML Retrieval

MPS-Authors

Solomon, Silvana
Databases and Information Systems, MPI for Informatics, Max Planck Society;
International Max Planck Research School, MPI for Informatics, Max Planck Society;

Citation

Solomon, S. (2007). Evaluation of Relevance Feedback Algorithms for XML Retrieval. Master's Thesis, Universität des Saarlandes, Saarbrücken.


Cite as: https://hdl.handle.net/11858/00-001M-0000-000F-1D94-C
Abstract
Information retrieval and feedback in XML are rather new fields for researchers, and natural questions arise: how good are feedback algorithms in XML IR? Can they be evaluated with standard evaluation tools? Even though some evaluation methods have been proposed in the literature, it is still not clear which of them are applicable in the context of XML IR, and which metrics they can be combined with to assess the quality of XML retrieval algorithms that use feedback. We propose a solution for fairly evaluating the performance of XML search engines that use feedback to improve query results. Compared to previous approaches, we aim at removing the effect of the results whose relevance is already known to the system, and at measuring the improvement on unseen relevant elements. We implemented our proposed evaluation methodologies by extending a standard evaluation tool with a module capable of assessing feedback algorithms for a specific set of metrics. We performed multiple tests on runs from both INEX 2005 and INEX 2006, covering two different XML document collections. The performance of the assessed feedback algorithms reached the theoretical optimum neither for the proposed evaluation methodologies nor for the metrics used. The analysis of the results shows that, although the six evaluation techniques provide good improvement figures, none of them can be declared the absolute winner. Despite the lack of a definitive conclusion, our findings provide a better understanding of the quality of feedback algorithms.
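
The evaluation idea described in the abstract, discounting results whose relevance was already revealed to the system and scoring only unseen relevant elements, resembles the classical residual-collection approach to relevance feedback evaluation. The following is a minimal Python sketch of that idea, not the thesis's actual implementation (which extends a standard INEX evaluation tool); the function name, the choice of precision@k as the metric, and the toy element IDs are illustrative assumptions.

from typing import List, Set

def residual_precision_at_k(ranked: List[str], relevant: Set[str],
                            seen: Set[str], k: int = 10) -> float:
    """Precision@k on the residual ranking: elements whose relevance
    was already revealed during the feedback round (the 'seen' set)
    are removed before scoring, so only gains on unseen relevant
    elements count toward the feedback run's score."""
    residual = [e for e in ranked if e not in seen]
    top = residual[:k]
    if not top:
        return 0.0
    return sum(1 for e in top if e in relevant) / k

# Hypothetical toy data: element IDs from a baseline run and a feedback run.
relevant = {"e1", "e3", "e5", "e8"}
seen = {"e1", "e2"}                       # judged during the feedback round
baseline = ["e1", "e2", "e4", "e6", "e3"]
feedback = ["e1", "e3", "e5", "e2", "e8"]

print(residual_precision_at_k(baseline, relevant, seen, k=3))  # 1/3: only e3
print(residual_precision_at_k(feedback, relevant, seen, k=3))  # 3/3: e3, e5, e8

Scoring both runs on the residual ranking prevents a feedback algorithm from appearing to improve merely by re-ranking the elements it was already told are relevant.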