
Released

Paper

Distilling Information Reliability and Source Trustworthiness from Digital Traces

MPS-Authors

Valera, Isabel
Group M. Gomez Rodriguez, Max Planck Institute for Software Systems, Max Planck Society;


Gomez Rodriguez, Manuel
Group M. Gomez Rodriguez, Max Planck Institute for Software Systems, Max Planck Society;

Fulltext (public)

arXiv:1610.07472.pdf (Preprint), 804KB

Citation

Tabibian, B., Valera, I., Farajtabar, M., Song, L., Schölkopf, B., & Gomez Rodriguez, M. (2016). Distilling Information Reliability and Source Trustworthiness from Digital Traces. doi:10.1145/3038912.3052672.


Cite as: https://hdl.handle.net/11858/00-001M-0000-002D-0658-D
Abstract
Online knowledge repositories typically rely on their users or dedicated editors to evaluate the reliability of their content. These evaluations can be viewed as noisy measurements of both information reliability and information source trustworthiness. Can we leverage these noisy, often biased evaluations to distill robust, unbiased, and interpretable measures of both notions? In this paper, we argue that the temporal traces left by these noisy evaluations give cues about the reliability of the information and the trustworthiness of its sources. We then propose a temporal point process modeling framework that links these temporal traces to robust, unbiased, and interpretable notions of information reliability and source trustworthiness. Furthermore, we develop an efficient convex optimization procedure to learn the parameters of the model from historical traces. Experiments on real-world data gathered from Wikipedia and Stack Overflow show that our modeling framework accurately predicts evaluation events, provides an interpretable measure of information reliability and source trustworthiness, and yields interesting insights about real-world events.
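The abstract describes modeling the timing of evaluation events with a temporal point process whose parameters are learned by convex optimization. The snippet below is a minimal illustrative sketch, not the paper's actual model: it assumes (hypothetically) that evaluation events for an item follow a homogeneous Poisson process with constant rate, and shows why maximum-likelihood estimation of that rate is a convex (concave log-likelihood) problem. All function names and the constant-rate assumption are illustrative choices, not taken from the paper.

```python
import math

def poisson_mle_rate(n_events, horizon):
    """Closed-form MLE of a homogeneous Poisson rate: event count / observation window."""
    return n_events / horizon

def log_likelihood(rate, n_events, horizon):
    """Poisson-process log-likelihood for a constant rate over [0, horizon]:
    log L(rate) = n * log(rate) - rate * horizon, which is concave in rate,
    so the MLE is the unique maximizer of a convex optimization problem."""
    return n_events * math.log(rate) - rate * horizon

# Example: 20 evaluation events observed over a window of 10 time units.
rate_hat = poisson_mle_rate(20, 10.0)
# The closed-form estimate beats any nearby rate on the log-likelihood:
assert log_likelihood(rate_hat, 20, 10.0) > log_likelihood(rate_hat - 0.5, 20, 10.0)
assert log_likelihood(rate_hat, 20, 10.0) > log_likelihood(rate_hat + 0.5, 20, 10.0)
```

In the paper's setting the intensity would additionally depend on latent reliability and trustworthiness terms; this sketch only conveys the general point-process-likelihood machinery.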