  Human Computing and Crowdsourcing Methods for Knowledge Acquisition

Kondreddi, S. K. (2014). Human Computing and Crowdsourcing Methods for Knowledge Acquisition. PhD Thesis, Universität des Saarlandes, Saarbrücken.

Locators

OA-Status: Green
Locator: http://scidok.sulb.uni-saarland.de/doku/lic_ohne_pod.php?la=de (Copyright transfer agreement)

Creators

Creators:
Kondreddi, Sarath Kumar (1, 2), Author
Triantafillou, Peter (1), Advisor
Berberich, Klaus (1), Referee
Affiliations:
1. Databases and Information Systems, MPI for Informatics, Max Planck Society, ou_24018
2. International Max Planck Research School, MPI for Informatics, Max Planck Society, Campus E1 4, 66123 Saarbrücken, DE, ou_1116551

Content

Free keywords: -
Abstract: Ambiguity, complexity, and diversity in natural language textual expressions are major hindrances to automated knowledge extraction. As a result, state-of-the-art methods for extracting entities and relationships from unstructured data make incorrect extractions or produce noise. With the advent of human computing, computationally hard tasks have been addressed through human inputs. While text-based knowledge acquisition can benefit from this approach, humans alone cannot bear the burden of extracting knowledge from the vast textual resources that exist today. Even making payments for crowdsourced acquisition can quickly become prohibitively expensive. In this thesis we present principled methods that effectively garner human computing inputs for improving the extraction of knowledge-base facts from natural language texts. Our methods complement automatic extraction techniques with human computing to reap the benefits of both while overcoming each other's limitations. We present the architecture and implementation of HIGGINS, a system that combines an information extraction (IE) engine with a human computing (HC) engine to produce high-quality facts. The IE engine combines statistics derived from large Web corpora with semantic resources like WordNet and ConceptNet to construct a large dictionary of entity and relational phrases. It employs specifically designed statistical language models for phrase relatedness to generate questions and relevant candidate answers that are presented to human workers. Through extensive experiments we establish the superiority of this approach in extracting relation-centric facts from text. In our experiments we extract facts about fictitious characters in narrative text, where the issues of diversity and complexity in expressing relations are far more pronounced. Finally, we also demonstrate how interesting human computing games can be designed for knowledge acquisition tasks.
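The phrase-relatedness idea sketched in the abstract can be illustrated with a toy example. This is a minimal sketch, not the thesis's actual language models: it uses a made-up corpus of co-occurring relational phrases and a simple pointwise mutual information (PMI) statistic to suggest which candidate phrases might accompany a given relation in a question shown to human workers.

```python
import math
from collections import Counter
from itertools import combinations

# Toy "corpus": each entry is a set of relational phrases observed together.
# HIGGINS derives such statistics from large Web corpora; this data is invented.
documents = [
    {"fell in love with", "married", "proposed to"},
    {"fell in love with", "married"},
    {"fought against", "betrayed"},
    {"fought against", "betrayed", "killed"},
    {"married", "proposed to"},
]

def pmi_relatedness(docs):
    """Compute PMI between every pair of phrases that co-occur in a document."""
    n = len(docs)
    phrase_counts = Counter()
    pair_counts = Counter()
    for doc in docs:
        phrase_counts.update(doc)
        pair_counts.update(frozenset(p) for p in combinations(sorted(doc), 2))
    scores = {}
    for pair, c in pair_counts.items():
        a, b = tuple(pair)
        # PMI > 0 means the phrases co-occur more often than chance predicts.
        scores[pair] = math.log((c * n) / (phrase_counts[a] * phrase_counts[b]))
    return scores

scores = pmi_relatedness(documents)
# "married" co-occurs with "proposed to" but never with "betrayed", so a
# question about "married" would rank "proposed to" as a candidate answer
# and omit "betrayed" entirely.
print(scores[frozenset({"married", "proposed to"})] > 0)  # prints True
```

The thesis employs specifically designed statistical language models rather than plain PMI; the sketch only conveys why corpus co-occurrence statistics can surface relevant candidate answers for a crowdsourcing question.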

Details

Language(s): eng - English
Dates: 2014-05-06, 2014-05-12, 2014
Publication Status: Issued
Pages: 116 p.
Publishing info: Saarbrücken : Universität des Saarlandes
Table of Contents: -
Rev. Type: -
Identifiers: BibTex Citekey: Kondreddi2014b
URN: urn:nbn:de:bsz:291-scidok-57948
Degree: PhD
