  A Reproducible Benchmark for P2P Retrieval

Neumann, T., Bender, M., Michel, S., & Weikum, G. (2006). A Reproducible Benchmark for P2P Retrieval. In Proceedings of the 1st International Workshop on Performance and Evaluation of Data Management Systems, ExpDB 2006, in cooperation with ACM SIGMOD (pp. 1-8). New York, USA: ACM.


Files

NeumannBMW06.pdf (any fulltext), 160 KB
File permalink: -
Name: NeumannBMW06.pdf
Description: -
OA status: -
Visibility: Private
MIME type / checksum: application/pdf
Technical metadata: -
Copyright date: -
Copyright info: -
License: -

Creators

Creators:
Neumann, Thomas 1, Author
Bender, Matthias 1, Author
Michel, Sebastian 1, Author
Weikum, Gerhard 1, Author
Bonnet, Philippe, Editor
Manolescu, Ioana, Editor
Affiliations:
1 Databases and Information Systems, MPI for Informatics, Max Planck Society, ou_24018

Content

Keywords: -
Abstract: With the growing popularity of information retrieval (IR) in distributed systems and in particular P2P Web search, a huge number of protocols and prototypes have been introduced in the literature. However, nearly every paper considers a different benchmark for its experimental evaluation, rendering mutual comparison and the quantification of performance improvements an impossible task. We present a standardized, general-purpose benchmark for P2P IR systems that finally makes this possible. We start by presenting a detailed requirement analysis for such a standardized benchmark framework that allows for reproducible and comparable experimental setups without sacrificing the flexibility to suit different system models. We further suggest Wikipedia as a publicly available, all-purpose document corpus and introduce a simple yet flexible clustering strategy that assigns the Wikipedia articles as documents to an arbitrary number of peers. After proposing a standardized, real-world query set as the benchmark workload, we review the metrics to evaluate the benchmark results and present an example benchmark run for our fully implemented P2P Web search prototype MINERVA.
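
The abstract describes, at a high level, a clustering strategy that distributes Wikipedia articles over an arbitrary number of peers. As a purely illustrative sketch (the function name, input shape, and category-hashing scheme below are assumptions, not the paper's actual clustering algorithm), such a document-to-peer assignment could look like this in Python:

import zlib
from collections import defaultdict
from typing import Dict, List, Tuple

def assign_articles_to_peers(
    articles: List[Tuple[str, str]],  # assumed input: (article_title, category_label) pairs
    num_peers: int,
) -> Dict[int, List[str]]:
    """Hypothetical sketch: map each category to one of `num_peers` peers and
    collect its articles there, so every peer holds a topically coherent subset."""
    peers: Dict[int, List[str]] = defaultdict(list)
    for title, category in articles:
        # crc32 gives a hash that is stable across runs (unlike Python's salted hash()).
        peer_id = zlib.crc32(category.encode("utf-8")) % num_peers
        peers[peer_id].append(title)
    return dict(peers)

if __name__ == "__main__":
    sample = [
        ("Peer-to-peer", "Computer networks"),
        ("BitTorrent", "Computer networks"),
        ("Chicago", "Cities in Illinois"),
        ("Wikipedia", "Online encyclopedias"),
    ]
    for peer_id, docs in sorted(assign_articles_to_peers(sample, num_peers=4).items()):
        print(f"peer {peer_id}: {docs}")

Any grouping key could be swapped in; the point is only that the mapping is deterministic and parameterized by the number of peers, which supports the reproducible, comparable setups the abstract calls for.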

Details

Language(s): eng - English
Date: 2007-04-27, 2006
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Review method: -
Identifiers: eDoc: 314435
Other: Local-ID: C1256DBF005F876D-B1D251BC8260E5E9C12571B80053D231-NeumannBMW06
Degree: -

Event

Title: Untitled Event
Venue: Chicago, Illinois, USA
Start/end date: 2006-06-30

Source 1

Title: Proceedings of the 1st International Workshop on Performance and Evaluation of Data Management Systems, ExpDB 2006, in cooperation with ACM SIGMOD
Source genre: Conference proceedings
Creators:
Affiliations:
Place, publisher, edition: New York, USA : ACM
Pages: -
Volume / issue: -
Article number: -
Start / end page: 1 - 8
Identifier: ISBN: 1-59593-463-4