  A Reproducible Benchmark for P2P Retrieval

Neumann, T., Bender, M., Michel, S., & Weikum, G. (2006). A Reproducible Benchmark for P2P Retrieval. In Proceedings of the 1st International Workshop on Performance and Evaluation of Data Management Systems, ExpDB 2006, in cooperation with ACM SIGMOD (pp. 1-8). New York, USA: ACM.

Files

NeumannBMW06.pdf (Any fulltext), 160KB
 
File Permalink: -
Name: NeumannBMW06.pdf
Description: -
OA-Status: -
Visibility: Private
MIME-Type / Checksum: application/pdf
Technical Metadata: -
Copyright Date: -
Copyright Info: -
License: -

Creators

Creators:
Neumann, Thomas (1), Author
Bender, Matthias (1), Author
Michel, Sebastian (1), Author
Weikum, Gerhard (1), Author
Bonnet, Philippe, Editor
Manolescu, Ioana, Editor
Affiliations:
(1) Databases and Information Systems, MPI for Informatics, Max Planck Society, ou_24018

Content

Free keywords: -
Abstract: With the growing popularity of information retrieval (IR) in distributed systems, and in particular P2P Web search, a large number of protocols and prototypes have been introduced in the literature. However, nearly every paper uses a different benchmark for its experimental evaluation, making mutual comparison and the quantification of performance improvements impossible. We present a standardized, general-purpose benchmark for P2P IR systems that finally makes this possible. We start with a detailed requirement analysis for such a standardized benchmark framework that allows for reproducible and comparable experimental setups without sacrificing the flexibility to suit different system models. We further suggest Wikipedia as a publicly available, all-purpose document corpus, and introduce a simple yet flexible clustering strategy that assigns the Wikipedia articles as documents to an arbitrary number of peers. After proposing a standardized, real-world query set as the benchmark workload, we review the metrics used to evaluate the benchmark results and present an example benchmark run for our fully implemented P2P Web search prototype MINERVA.
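The abstract describes a clustering strategy that assigns Wikipedia articles to an arbitrary number of peers. The following is a minimal illustrative sketch only, not the paper's actual method: the function name, the category-based grouping, and the sample data are all assumptions. It groups articles by a category label and distributes whole clusters round-robin, so that thematically related documents land on the same peer.

```python
from collections import defaultdict

def assign_articles_to_peers(articles, num_peers):
    """Illustrative sketch (not the paper's algorithm): cluster
    (title, category) pairs by category, then hand out whole
    clusters to peers round-robin."""
    clusters = defaultdict(list)
    for title, category in articles:
        clusters[category].append(title)
    peers = [[] for _ in range(num_peers)]
    # Sort for a deterministic assignment of clusters to peers.
    for i, (_, titles) in enumerate(sorted(clusters.items())):
        peers[i % num_peers].extend(titles)
    return peers

# Hypothetical sample of (title, category) pairs:
sample = [
    ("Alan Turing", "Computer science"),
    ("P2P", "Computer science"),
    ("Mount Everest", "Geography"),
    ("Danube", "Geography"),
    ("Mozart", "Music"),
]
print(assign_articles_to_peers(sample, 2))
```

Because whole clusters are assigned, both "Computer science" articles end up on the same peer, which mirrors the topical locality such a benchmark setup relies on; the real strategy in the paper may use entirely different clustering criteria.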

Details

Language(s): eng - English
Dates: 2007-04-27, 2006
 Publication Status: Issued
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: eDoc: 314435
Other: Local-ID: C1256DBF005F876D-B1D251BC8260E5E9C12571B80053D231-NeumannBMW06
 Degree: -

Event

Title: Untitled Event
Place of Event: Chicago, Illinois, USA
Start-/End Date: 2006-06-30

Source 1

Title: Proceedings of the 1st International Workshop on Performance and Evaluation of Data Management Systems, ExpDB 2006, in cooperation with ACM SIGMOD
Source Genre: Proceedings
Creator(s): -
Affiliations: -
Publ. Info: New York, USA : ACM
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 1 - 8
Identifier: ISBN: 1-59593-463-4