  Data Quality in Web Archiving

Spaniol, M., Denev, D., Mazeika, A., & Weikum, G. (2009). Data Quality in Web Archiving. In Proceedings of the 3rd Workshop on Information Credibility on the Web (WICOW 2009) in conjunction with the 18th World Wide Web Conference (WWW 2009) (pp. 19-26). New York, NY: ACM.


Files

p19-spaniolA.pdf (any fulltext), 658KB

File Permalink:
-
Name:
p19-spaniolA.pdf
Description:
-
OA Status:
-
Visibility:
Private
MIME Type / Checksum:
application/pdf
Technical Metadata:
-
Copyright Date:
-
Copyright Info:
-
License:
-

External References

Creators

Creators:
Spaniol, Marc1, Author
Denev, Dimitar1, Author
Mazeika, Arturas1, Author
Weikum, Gerhard1, Author
Affiliations:
1 Databases and Information Systems, MPI for Informatics, Max Planck Society, ou_24018

Content

Keywords: -
Abstract: Web archives preserve the history of Web sites and have high long-term value for media and business analysts. Such archives are maintained by periodically re-crawling entire Web sites of interest. From an archivist's point of view, the ideal case to ensure highest possible data quality of the archive would be to "freeze" the complete contents of an entire Web site during the time span of crawling and capturing the site. Of course, this is practically infeasible. To comply with the politeness specification of a Web site, the crawler needs to pause between subsequent HTTP requests in order to avoid unduly high load on the site's HTTP server. As a consequence, capturing a large Web site may span hours or even days, which increases the risk that contents collected so far are incoherent with the parts that are still to be crawled. This paper introduces a model for identifying coherent sections of an archive and, thus, measuring the data quality in Web archiving. Additionally, we present a crawling strategy that aims to ensure archive coherence by minimizing the diffusion of Web site captures. Preliminary experiments demonstrate the usefulness of the model and the effectiveness of the strategy.
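The coherence problem the abstract describes can be made concrete with a toy example. The paper's actual model is more elaborate; this is only a minimal sketch under assumed names (`Capture`, `capture_span`, `incoherent_pages` are all hypothetical), flagging pages whose server-side modification time falls inside the crawl's time span and which may therefore be inconsistent with pages fetched earlier.

```python
from dataclasses import dataclass

# Hypothetical politeness delay (seconds between requests); a large site
# crawled at this rate is why the capture span stretches to hours or days.
POLITENESS_DELAY = 2.0

@dataclass
class Capture:
    url: str
    fetched_at: float      # time (s) at which the crawler fetched the page
    last_modified: float   # server-reported last-modification time (s)

def capture_span(captures):
    """Return the [start, end] interval over which the site was crawled."""
    times = [c.fetched_at for c in captures]
    return min(times), max(times)

def incoherent_pages(captures):
    """Pages modified *during* the capture span: their content may be
    inconsistent with pages fetched before the change (simplistic check)."""
    start, end = capture_span(captures)
    return [c.url for c in captures if start < c.last_modified <= end]

# Four pages crawled politely (2 s apart); one page changed mid-crawl.
pages = [
    Capture("/index.html", fetched_at=0.0, last_modified=-100.0),
    Capture("/a.html",     fetched_at=2.0, last_modified=-50.0),
    Capture("/b.html",     fetched_at=4.0, last_modified=3.0),   # changed during crawl
    Capture("/c.html",     fetched_at=6.0, last_modified=-10.0),
]
print(incoherent_pages(pages))  # ['/b.html']
```

Minimizing the number of such flagged pages is, loosely, what the paper's coherence-aware crawling strategy optimizes for.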

Details

Language(s): eng - English
Date: 2010-01-20, 2009
Publication Status: Published
Pages: -
Place, Publisher, Edition: -
Table of Contents: -
Review Method: -
Identifiers: eDoc: 520424
URI: http://www.dl.kuis.kyoto-u.ac.jp/wicow3/papers/p19-spaniolA.pdf
Other: Local-ID: C1256DBF005F876D-5AFA8025BFAF8A0AC125759900493187-Spaniol-WICOW09
Degree Type: -

Event

Title: 3rd Workshop on Information Credibility on the Web
Place of Event: Madrid, Spain
Start/End Date: 2009-04-20 - 2009-04-20

Decision

Project Information

Source 1

Title: Proceedings of the 3rd Workshop on Information Credibility on the Web (WICOW 2009) in conjunction with the 18th World Wide Web Conference (WWW 2009)
Source Genre: Conference Proceedings
Creators: -
Affiliations: -
Place, Publisher, Edition: New York, NY : ACM
Pages: -
Volume / Issue: -
Article Number: -
Start / End Page: 19 - 26
Identifier: ISBN: 978-1-60558-488-1